Abstract

Reconfigurable systems, and in particular, FPGA-based custom computing machines, offer a unique opportunity to define application-specific architectures. These architectures offer performance advantages for application domains such as image processing, where the use of customized pipelines exploits the inherent coarse-grain parallelism. In this paper we describe a set of program analyses and an implementation that map a sequential and un-annotated C program into a pipelined implementation running on a set of FPGAs, each with multiple external memories. Based on well-known parallel computing analysis techniques, our algorithms perform unrolling for operator parallelization, reuse and data layout for memory parallelization and precise communication analysis. We extend these techniques for FPGA-based systems to automatically partition the application data and computation into custom pipeline stages, taking into account the available FPGA and interconnect resources. We illustrate the analysis components by way of an example, a machine vision program. We present the algorithm results, derived with minimal manual intervention, which demonstrate the potential of this approach for automatically deriving pipelined designs from high-level sequential specifications.
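
To make the notion of "unrolling for operator parallelization" concrete, here is a rough, hypothetical C sketch, not taken from the paper: a simple thresholding kernel of the kind found in machine vision codes, shown in its sequential form and in a hand-unrolled form of the kind the described analyses would derive automatically. The array names, image size, and unroll factor are all assumptions for illustration only.

/*
 * Hypothetical sketch: a sequential image-processing loop and an
 * unrolled variant. Unrolling by 4 exposes four independent threshold
 * operations per iteration, which hardware synthesis could map to four
 * parallel operators on an FPGA. Names and sizes are assumed, not from
 * the paper.
 */
#include <stdio.h>

#define N 1024   /* assumed image width, divisible by the unroll factor */

static unsigned char in[N], out[N];

/* Original sequential form: one operation per iteration. */
void threshold_seq(unsigned char thresh)
{
    for (int i = 0; i < N; i++)
        out[i] = (in[i] > thresh) ? 255 : 0;
}

/* Unrolled by 4: the four body statements have no loop-carried
 * dependences, so they can execute concurrently in hardware. */
void threshold_unrolled(unsigned char thresh)
{
    for (int i = 0; i < N; i += 4) {
        out[i]     = (in[i]     > thresh) ? 255 : 0;
        out[i + 1] = (in[i + 1] > thresh) ? 255 : 0;
        out[i + 2] = (in[i + 2] > thresh) ? 255 : 0;
        out[i + 3] = (in[i + 3] > thresh) ? 255 : 0;
    }
}

int main(void)
{
    for (int i = 0; i < N; i++)
        in[i] = (unsigned char)(i & 0xFF);
    threshold_unrolled(128);
    printf("out[200] = %d\n", out[200]);   /* prints 255, since 200 > 128 */
    return 0;
}

In the same spirit, the data-layout and reuse analyses mentioned in the abstract would distribute the in and out arrays across the multiple external memories so that the parallel operators can be fed concurrently.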


Original document

The different versions of the original document can be found at:

http://dx.doi.org/10.1109/FPGA.2002.1106663,
https://dblp.uni-trier.de/db/conf/fccm/fccm2002.html#ZieglerSHD02,
https://dl.acm.org/citation.cfm?id=795952,
https://ieeexplore.ieee.org/document/1106663,
https://academic.microsoft.com/#/detail/2151264479

Document information

Published on 01/01/2003

Volume 2003, 2003
DOI: 10.1109/fpga.2002.1106663
Licence: CC BY-NC-SA
