Abstract

Big Data Pipelines decompose complex analyses of large data sets into a series of simpler tasks, with independently tuned components for each task. This modular setup allows reuse of components across several different pipelines. However, the interaction of independently tuned pipeline components can yield poor end-to-end performance, as errors introduced by one component cascade through the whole pipeline and degrade overall accuracy. We propose a novel model for reasoning across components of Big Data Pipelines in a probabilistically well-founded manner. Our key idea is to view the interaction of components as dependencies on an underlying graphical model. Different message-passing schemes on this graphical model yield different inference algorithms that trade off end-to-end performance against computational cost. We instantiate our framework with an efficient beam search algorithm, and demonstrate its effectiveness on two Big Data Pipelines: parsing and relation extraction.
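
To make the beam search instantiation concrete, the following is a minimal Python sketch, not the paper's implementation: the Stage signature, beam_search_pipeline, and the toy stages are illustrative assumptions. It shows the core idea of keeping the top beam_width joint hypotheses with accumulated scores at every stage, rather than myopically committing to each component's single best output.

    import heapq
    from typing import Callable, List, Tuple

    # Hypothetical interface: a pipeline stage maps one input hypothesis
    # to a list of (output_hypothesis, log_score) candidates.
    Stage = Callable[[str], List[Tuple[str, float]]]

    def beam_search_pipeline(x: str, stages: List[Stage],
                             beam_width: int) -> List[Tuple[str, float]]:
        # Start with the raw input as the only hypothesis (log-score 0).
        beam = [(x, 0.0)]
        for stage in stages:
            candidates = []
            for hyp, score in beam:
                # Expand each surviving hypothesis with this stage's
                # candidates, accumulating log-scores across components.
                for out, s in stage(hyp):
                    candidates.append((out, score + s))
            # Keep only the top beam_width joint hypotheses for the next stage.
            beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
        return beam

    # Toy two-stage pipeline (hypothetical stages for illustration only).
    tagger = lambda s: [(s + "/NN", -0.1), (s + "/VB", -0.3)]
    parser = lambda s: [("(" + s + ")", -0.2), ("[" + s + "]", -0.5)]
    print(beam_search_pipeline("run", [tagger, parser], beam_width=2))

With beam_width = 1 this reduces to the myopic pipeline; larger beams trade extra computation for better end-to-end accuracy, mirroring the trade-off between message-passing schemes described above.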


Original document

The different versions of the original document can be found at:

http://www.cs.cornell.edu/%7Eadith/docs/Topk.pdf
https://dl.acm.org/citation.cfm?id=2487588
https://core.ac.uk/display/22731250
https://academic.microsoft.com/#/detail/1987331701
http://dx.doi.org/10.1145/2487575.2487588

Document information

Published on 01/01/2013

Volume 2013, 2013
DOI: 10.1145/2487575.2487588
Licence: CC BY-NC-SA
