Abstract

Many decision-making queries are based on aggregating massive amounts of data, where sampling is an important approximation technique for reducing execution times. It is important to estimate error bounds when sampling to help users balance precision and performance. However, error bound estimation is challenging because data processing pipelines often transform the input dataset in complex ways before computing the final aggregated values. In this paper, we introduce a sampling framework to support approximate computing with estimated error bounds in Spark. Our framework allows sampling to be performed at multiple arbitrary points within a sequence of transformations preceding an aggregation operation. The framework constructs a data provenance tree to maintain information about how transformations cluster output data items to be aggregated. It then uses the tree and multi-stage sampling theory to compute the approximate aggregate values and corresponding error bounds. When information about output keys is available early, the framework can also use adaptive stratified reservoir sampling to avoid (or reduce) key losses in the final output and to achieve more consistent error bounds across popular and rare keys. Finally, the framework includes an algorithm to dynamically choose sampling rates to meet user-specified constraints on the CDF of error bounds in the outputs. We have implemented a prototype of our framework called ApproxSpark and used it to implement five approximate applications from different domains. Evaluation results show that ApproxSpark can (a) significantly reduce execution time if users can tolerate a small amount of uncertainty and, in many cases, loss of rare keys, and (b) automatically find sampling rates to meet user-specified constraints on error bounds. We also extensively explore and discuss the tradeoffs among sampling rates, execution time, accuracy, and key loss.
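To make the two key ideas in the abstract concrete, the sketch below illustrates (1) per-key (stratified) reservoir sampling, which keeps a fixed-size sample for every output key so rare keys are not drowned out by popular ones, and (2) a per-key aggregate estimate with an error bound. This is a minimal, non-adaptive illustration: the function names are hypothetical, the error bound shown is the standard single-stage simple-random-sampling bound with finite-population correction (not the paper's multi-stage estimator), and ApproxSpark's adaptive algorithm additionally adjusts reservoir sizes dynamically.

```python
import math
import random

def stratified_reservoir_sample(stream, capacity, seed=0):
    """Maintain an independent fixed-size reservoir per output key
    (classic Algorithm R applied within each stratum), so every key
    seen in the stream survives into the sample."""
    rng = random.Random(seed)
    reservoirs = {}  # key -> list of sampled values
    counts = {}      # key -> number of items seen for that key
    for key, value in stream:
        n = counts.get(key, 0) + 1
        counts[key] = n
        res = reservoirs.setdefault(key, [])
        if len(res) < capacity:
            res.append(value)          # reservoir not yet full
        else:
            j = rng.randrange(n)       # replace with probability capacity/n
            if j < capacity:
                res[j] = value
    return reservoirs, counts

def estimate_sums(reservoirs, counts, z=1.96):
    """Per-key sum estimate with a normal-approximation error bound:
    scale the sample mean by the true per-key count; the half-width
    uses the sample variance and a finite-population correction."""
    out = {}
    for key, res in reservoirs.items():
        n, total = len(res), counts[key]
        mean = sum(res) / n
        est = total * mean
        if 1 < n < total:
            var = sum((v - mean) ** 2 for v in res) / (n - 1)
            half = z * total * math.sqrt(var / n) * math.sqrt(1 - n / total)
        else:
            half = 0.0  # key fully sampled (or a single item): exact
        return_val = (est, half)
        out[key] = return_val
    return out
```

Note that a fully sampled key yields an exact aggregate with zero error bound, while a heavily subsampled popular key yields an estimate plus a confidence-interval half-width; this is the sense in which stratification gives more consistent error bounds across popular and rare keys.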


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/mascots.2019.00017
https://arxiv.org/abs/1812.01823
http://arxiv.org/pdf/1812.01823.pdf
https://academic.microsoft.com/#/detail/2975606313


DOIs: 10.7282/t3zc86ht 10.7282/t3-zrrv-jn27 10.7282/t3348q1k 10.1109/mascots.2019.00017 10.7282/t3cn77js


Document information

Published on 01/01/2018

Volume 2018, 2018
DOI: 10.7282/t3zc86ht
Licence: CC BY-NC-SA
