Abstract

"Next generation" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines. In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity "big data" systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8--8.9x improvement over the state-of-the-art MPI-based system.


Original document

The different versions of the original document can be found in:

https://amplab.cs.berkeley.edu/wp-content/uploads/2015/03/adam.pdf,
https://dl.acm.org/citation.cfm?id=2742787,
https://dl.acm.org/citation.cfm?id=2723372.2742787,
http://fnothaft.net/docs/papers/adam-sigmod-2015.pdf,
https://doi.org/10.1145/2723372.2742787,
https://academic.microsoft.com/#/detail/1989017925,
http://dx.doi.org/10.1145/2723372.2742787

under the license http://www.acm.org/publications/policies/copyright_policy#Background

Document information

Published in 2015

DOI: 10.1145/2723372.2742787
Licence: Other
