Article

Multi-GPU performance optimization of a computational fluid dynamics code using OpenACC

Journal

Concurrency and Computation: Practice and Experience

Publisher

WILEY

DOI: 10.1002/cpe.6036

Keywords

domain decomposition; GPUDirect; MPI; multi-GPU; OpenACC; performance optimization


This article explores the multi-GPU performance of a 3D buoyancy-driven cavity solver using MPI and OpenACC directives, showing that the dimension along which the problem is decomposed has a significant impact on strong scaling on GPUs. Every decomposition benefits from the performance optimizations presented in the article, such as parallel message packing/unpacking and transferring only the data required by each variable's stencil size. These optimizations aim to reduce communication cost and improve memory throughput between hosts and devices.
This article investigates the multi-GPU performance of a 3D buoyancy-driven cavity solver using MPI and OpenACC directives on multiple platforms. The article shows that decomposing the total problem in different dimensions affects the strong scaling performance significantly for the GPU. Without proper performance optimizations, 1D domain decomposition is shown to scale poorly on multiple GPUs due to noncontiguous memory access. Performance with any decomposition benefits from the series of optimizations presented in the article. Since the buoyancy-driven cavity code is communication-bound on the clusters examined, a series of optimizations, both platform-agnostic and platform-specific, are designed to reduce the communication cost and improve memory throughput between hosts and devices. First, a parallel message packing/unpacking strategy developed for noncontiguous data movement between hosts and devices improves the overall performance by about a factor of 2. Second, transferring different amounts of data based on the stencil sizes of different variables further reduces the communication overhead. These two optimizations are general enough to benefit any stencil computation with ghost exchanges. Third, GPUDirect is used to improve communication on clusters that have the hardware and software support for direct communication between GPUs without staging through CPU memory. Finally, overlapping communication and computation is shown to be inefficient on multiple GPUs when using only MPI or MPI+OpenACC. Although we believe our implementation exposes sufficient overlap between communication and computation, the actual execution does not exploit it well due to a lack of asynchronous progression.
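As an illustration of the packing/unpacking and GPUDirect optimizations described above, the sketch below shows how a ghost-cell exchange along a noncontiguous dimension might be written with MPI and OpenACC in C. It is not the authors' implementation: the array name phi, the grid extents, the ghost width NG, and the routine exchange_y_ghosts are placeholder assumptions, and the direct device-to-device MPI call requires a CUDA-aware (GPUDirect-capable) MPI build.

    #include <mpi.h>

    /* Hypothetical grid extents and ghost width; in the article the ghost
     * width depends on each variable's stencil size. */
    #define NX 128
    #define NY 128
    #define NZ 128
    #define NG 1

    /* Contiguous staging buffers, created on both host and device. */
    static double sendbuf[NX][NG][NZ], recvbuf[NX][NG][NZ];
    #pragma acc declare create(sendbuf, recvbuf)

    /* Exchange one y-direction ghost layer with the lower/upper neighbors.
     * phi is assumed to be resident on the device already (e.g., copied in
     * earlier by the solver with "acc enter data"). Only the shift toward
     * the lower neighbor is shown; the opposite direction is symmetric. */
    void exchange_y_ghosts(double phi[NX][NY + 2 * NG][NZ],
                           int lower, int upper, MPI_Comm comm)
    {
        /* Pack the noncontiguous y-face into a contiguous buffer on the GPU. */
        #pragma acc parallel loop collapse(3) present(phi, sendbuf)
        for (int i = 0; i < NX; ++i)
            for (int g = 0; g < NG; ++g)
                for (int k = 0; k < NZ; ++k)
                    sendbuf[i][g][k] = phi[i][NG + g][k];

        /* host_data hands MPI the device addresses of the buffers, so a
         * GPUDirect-capable MPI can move them GPU-to-GPU without staging
         * through host memory. */
        #pragma acc host_data use_device(sendbuf, recvbuf)
        {
            MPI_Sendrecv(sendbuf, NX * NG * NZ, MPI_DOUBLE, lower, 0,
                         recvbuf, NX * NG * NZ, MPI_DOUBLE, upper, 0,
                         comm, MPI_STATUS_IGNORE);
        }

        /* Unpack the received layer into the upper ghost cells, again on the GPU. */
        #pragma acc parallel loop collapse(3) present(phi, recvbuf)
        for (int i = 0; i < NX; ++i)
            for (int g = 0; g < NG; ++g)
                for (int k = 0; k < NZ; ++k)
                    phi[i][NY + NG + g][k] = recvbuf[i][g][k];
    }

The ghost width NG stands in for the per-variable stencil size mentioned in the second optimization: variables with smaller stencils would exchange thinner layers, reducing message volume.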

