Abstract

Harvesting provenance for streaming workflows presents challenges related to the high rate of updates and the wide distribution of the execution, which can be spread across several institutional infrastructures. Moreover, the typically large volume of data produced by each transformation step cannot always be stored and preserved efficiently. This can be an obstacle to the evaluation of results, for instance in real time, which suggests the importance of customisable metadata extraction procedures. In this paper we present our approach to the aforementioned provenance challenges within a use-case-driven scenario in the field of seismology, which requires the execution of processing pipelines over a large data stream. In particular, we discuss the current implementation and the upcoming challenges for an in-workflow programmatic approach to provenance tracing, building on composite functions, selective recording and domain-specific metadata production.
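To make the three ideas named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: a composite function chains transformation steps over a stream, a selector implements selective recording, and a user-supplied extractor produces domain-specific metadata so that only lightweight provenance records, rather than the full intermediate data, are preserved. All names (ProvenanceRecorder, composite, the selector and extractor callbacks) are hypothetical and introduced only for illustration.

```python
import hashlib
import json
import time
from typing import Any, Callable, Dict, List, Tuple


class ProvenanceRecorder:
    """Keeps one lightweight record per transformation step, instead of
    preserving the (potentially very large) intermediate data itself."""

    def __init__(self) -> None:
        self.records: List[Dict[str, Any]] = []

    def record(self, step: str, data: Any, metadata: Dict[str, Any]) -> None:
        # A short fingerprint of the output stands in for the data itself.
        digest = hashlib.sha1(repr(data).encode()).hexdigest()[:12]
        self.records.append({
            "step": step,
            "timestamp": time.time(),
            "data_digest": digest,
            "metadata": metadata,   # domain-specific summary supplied by the user
        })


def composite(steps: List[Tuple[str, Callable[[Any], Any]]],
              recorder: ProvenanceRecorder,
              selector: Callable[[str, Any], bool] = lambda step, out: True,
              extractor: Callable[[Any], Dict[str, Any]] = lambda out: {}):
    """Compose processing steps into a single callable; `selector`
    implements selective recording and `extractor` produces
    domain-specific metadata for each recorded step."""
    def pipeline(item: Any) -> Any:
        out = item
        for name, fn in steps:
            out = fn(out)
            if selector(name, out):
                recorder.record(name, out, extractor(out))
        return out
    return pipeline


if __name__ == "__main__":
    # Toy stream: lists of samples standing in for waveform windows.
    stream = [[0.1, 0.5, -0.3], [2.0, -1.2, 0.7]]

    detrend = ("detrend", lambda w: [x - sum(w) / len(w) for x in w])
    absval = ("abs", lambda w: [abs(x) for x in w])

    rec = ProvenanceRecorder()
    run = composite(
        steps=[detrend, absval],
        recorder=rec,
        selector=lambda step, out: step == "abs",   # record only the final step
        extractor=lambda out: {"peak": max(out)},   # domain-specific metadata
    )

    results = [run(w) for w in stream]
    print(json.dumps(rec.records, indent=2))
```

In this sketch the recorded provenance grows with the number of selected steps, not with the volume of the data, which is the point of pairing selective recording with customisable metadata extraction.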


Original document

The different versions of the original document can be found in:

https://dblp.uni-trier.de/db/conf/edbt/edbtw2013.html#SpinusoCA13
https://academic.microsoft.com/#/detail/2010205794
http://dx.doi.org/10.1145/2457317.2457369

Document information

Published on 31/12/12
Accepted on 31/12/12
Submitted on 31/12/12

Volume 2013, 2013
DOI: 10.1145/2457317.2457369
Licence: CC BY-NC-SA
