Article

Automatic Creation of High-bandwidth Memory Architectures from Domain-specific Languages: The Case of Computational Fluid Dynamics

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3563553

Keywords

High-level synthesis; domain-specific languages; computational-fluid dynamics; MLIR; automatic memory generation; HBM

This article proposes an automated tool flow for generating massively parallel accelerators on high-bandwidth-memory-equipped FPGAs from a domain-specific language. The method allows designers to integrate and evaluate various compiler or hardware optimizations. Experimental results show that this approach enables efficient data movement and processing, and achieves up to 103 GFLOPS with one compute unit on a Xilinx Alveo U280, which is up to 25x more energy efficient than expert-crafted Intel CPU implementations.
Numerical simulations can help solve complex problems. Most of these algorithms are massively parallel and thus good candidates for FPGA acceleration thanks to spatial parallelism. Modern FPGA devices can leverage high-bandwidth memory technologies, but when applications are memory-bound, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. This development process requires hardware design skills that are uncommon in domain-specific experts. In this article, we propose an automated tool flow from a domain-specific language for tensor expressions to generate massively parallel accelerators on high-bandwidth-memory-equipped FPGAs. Designers can use this flow to integrate and evaluate various compiler or hardware optimizations. We use computational fluid dynamics (CFD) as a paradigmatic example. Our flow starts from the high-level specification of tensor operations and combines a multi-level intermediate representation (MLIR)-based compiler with an in-house hardware generation flow to generate systems with parallel accelerators and a specialized memory architecture that moves data efficiently, aiming at fully exploiting the available CPU-FPGA bandwidth. We simulated applications with millions of elements, achieving up to 103 GFLOPS with one compute unit and custom precision when targeting a Xilinx Alveo U280. Our FPGA implementation is up to 25x more energy efficient than expert-crafted Intel CPU implementations.
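To give a sense of the kind of tensor operation such a flow starts from, here is a minimal sketch of a 5-point Jacobi stencil, a kernel typical of the CFD workloads the abstract describes. The function name and formulation are illustrative assumptions, not the paper's actual DSL syntax:

```python
# Hypothetical sketch of a CFD-style stencil kernel; a tensor-expression DSL
# would express the same computation declaratively, and the compiler would
# map it to parallel FPGA accelerators with a custom memory architecture.
def jacobi_step(u):
    """One Jacobi iteration: each interior point becomes the average of
    its four neighbours. `u` is a 2D grid as a list of lists of floats."""
    n, m = len(u), len(u[0])
    out = [row[:] for row in u]  # boundaries are left unchanged
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1])
    return out

grid = [[0.0] * 4 for _ in range(4)]
grid[0] = [1.0] * 4              # hot top boundary, cold elsewhere
print(jacobi_step(grid)[1][1])   # -> 0.25
```

Kernels like this are memory-bound at scale, which is why the paper's specialized data-movement architecture for HBM-equipped FPGAs matters.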

Authors

