Abstract

The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such a distributed pipeline is becoming available, but few services support even marginally optimal resource selection and partitioning of the data analysis workflow. We explore a methodology for building a model of overall application performance using a composition of the analytic models of individual components that comprise the pipeline. The analytic models are shown to be accurate on a testbed of distributed heterogeneous systems. The prediction methodology will form the foundation of a more robust resource management service for future Grid-based visualization applications.
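The composition idea in the abstract can be made concrete with a small sketch: each pipeline stage gets a simple analytic cost model (compute time plus time to transfer its reduced output), the per-stage models are composed into an end-to-end prediction, and that prediction drives resource selection. The Python sketch below is illustrative only; the stage names, parameters, and cost formulas are assumptions for the sake of the example, not the models developed in the report.

from dataclasses import dataclass
from itertools import product


@dataclass
class Resource:
    """A host or cluster available to run one pipeline stage (hypothetical parameters)."""
    name: str
    compute_rate: float      # MB of input processed per second
    link_bandwidth: float    # MB/s available to ship output downstream


@dataclass
class Stage:
    """One component of the visualization pipeline (hypothetical parameters)."""
    name: str
    work_per_mb: float       # relative compute cost per MB of input
    reduction_factor: float  # output size as a fraction of input size


def stage_time(stage, resource, input_mb):
    """Analytic model of a single stage: compute time plus output-transfer time."""
    compute = input_mb * stage.work_per_mb / resource.compute_rate
    output_mb = input_mb * stage.reduction_factor
    transfer = output_mb / resource.link_bandwidth
    return compute + transfer, output_mb


def pipeline_time(stages, assignment, data_mb):
    """Compose the per-stage models into an end-to-end performance prediction."""
    total, size = 0.0, data_mb
    for stage, resource in zip(stages, assignment):
        t, size = stage_time(stage, resource, size)
        total += t
    return total


def best_assignment(stages, resources, data_mb):
    """Pick the stage-to-resource mapping with the lowest predicted total time."""
    return min(product(resources, repeat=len(stages)),
               key=lambda a: pipeline_time(stages, a, data_mb))


# Illustrative use: a two-stage pipeline (data reduction, then rendering)
# over two hypothetical resources, starting from a 1 TB remote data set.
stages = [Stage("reduce", 2.0, 0.05), Stage("render", 1.0, 0.01)]
resources = [Resource("remote-cluster", 500.0, 100.0),
             Resource("local-viz-host", 200.0, 1000.0)]
mapping = best_assignment(stages, resources, 1_000_000.0)
print([r.name for r in mapping], pipeline_time(stages, mapping, 1_000_000.0))

A resource manager of the kind the abstract envisions would replace the exhaustive search with a smarter scheduler and calibrate the per-stage parameters against measurements on the target testbed.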

Document type: Report

Document information

Published on 01/01/2004

Volume 2004, 2004
DOI: 10.2172/841324
Licence: CC BY-NC-SA
