Document type: Report
 
 
== Full document ==
 
<pdf>Media:Draft_Content_151831413-beopen66-5848-document.pdf</pdf>
 

== Abstract ==

We conducted a feasibility study of modifications to data-flow architectures that would allow data-flow to be distributed across multiple machines automatically. Distributed data-flow is a crucial technology for enabling tools such as the VisIt visualization application to provide in-situ data analysis and post-processing for simulations on petascale machines. We modified a version of VisIt to study load-balancing trade-offs between light-weight kernel compute environments and dedicated post-processing cluster nodes. Our research focused on memory overheads for contouring operations, which involve variable amounts of generated geometry on each node and the computation of normal vectors for all generated vertices. At each stage of pipeline execution, each compute node independently decided, based on its available memory, whether to send data to dedicated post-processing nodes. We instrumented the code with user-settable available-memory limits so that we could emulate extremely low-overhead compute environments. We performed initial testing of this prototype distributed streaming framework but did not have time to perform scaling studies at and beyond 1,000 compute nodes.
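The per-stage decision described above can be sketched as follows. This is a hypothetical illustration, not code from the report: the function name, parameters, and the simple threshold policy are all assumptions chosen to show the idea of comparing a stage's estimated memory footprint against both the node's free memory and a user-settable cap that emulates a light-weight kernel environment.

```python
# Hypothetical sketch of the per-node, per-stage offload decision:
# at each pipeline stage, a compute node checks whether the stage's
# estimated memory need (e.g. contour geometry plus per-vertex normals)
# fits within its budget; if not, it ships the data to a dedicated
# post-processing node instead of processing locally.

def choose_execution_site(stage_estimate_bytes: int,
                          available_bytes: int,
                          user_limit_bytes: int) -> str:
    """Return 'local' or 'offload' for one pipeline stage.

    stage_estimate_bytes: estimated memory the stage would consume.
    available_bytes: memory currently free on this compute node.
    user_limit_bytes: instrumented, user-settable cap used to emulate
        an extremely low-overhead compute environment.
    """
    # The effective budget is the smaller of what is physically free
    # and what the instrumentation allows the node to use.
    budget = min(available_bytes, user_limit_bytes)
    return "local" if stage_estimate_bytes <= budget else "offload"
```

Because each node applies this test independently at every stage, nodes that generate little contour geometry can finish locally while heavily loaded nodes stream their data onward.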


== Original document ==

The different versions of the original document can be found in:

https://www.scipedia.com/public/Laney_Childs_2009a,
https://www.osti.gov/servlets/purl/1020344,
https://academic.microsoft.com/#/detail/30262045

== Document information ==

Published on 01/01/2009

Volume 2009, 2009
DOI: 10.2172/1020344
Licence: CC BY-NC-SA
