Article

High performance sparse multifrontal solvers on modern GPUs

Journal

PARALLEL COMPUTING
Volume 110, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.parco.2022.102897

Keywords

Sparse; Direct solver; Multifrontal; GPU; CUDA; HIP

Funding

  1. Exascale Computing Project, U.S. Department of Energy Office of Science [17-SC-20-SC]
  2. National Nuclear Security Administration
  3. U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program through the FASTMath Institute at Lawrence Berkeley National Laboratory [DE-AC02-05CH11231]

Abstract

We have ported the numerical factorization and triangular solve phases of the sparse direct solver STRUMPACK to GPU. STRUMPACK implements sparse LU factorization using the multifrontal algorithm, which performs most of its work in dense linear algebra operations on so-called frontal matrices of various sizes. Our GPU implementation off-loads these dense linear algebra operations, as well as the sparse scatter-gather operations between frontal matrices. For the larger frontal matrices, our GPU implementation relies on vendor libraries such as cuBLAS and cuSOLVER for NVIDIA GPUs, and rocBLAS and rocSOLVER for AMD GPUs. For the smaller frontal matrices, we developed custom CUDA and HIP kernels to reduce kernel launch overhead. Overall, high performance is achieved by identifying submatrix factorizations corresponding to sub-trees of the multifrontal assembly tree which fit entirely in GPU memory. The multi-GPU setting uses SLATE (Software for Linear Algebra Targeting Exascale) as a modern GPU-aware replacement for ScaLAPACK. On 4 nodes of SUMMIT the code runs ~10x faster when using all 24 V100 GPUs compared to when it only uses the 168 POWER9 cores. On 8 SUMMIT nodes, using 48 V100 GPUs, the sparse solver reaches over 50 TFlop/s. Compared to SuperLU, on a single V100, for a set of 17 matrices our implementation is faster for all but one matrix, and is on average 5x (median 4x) faster.
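To illustrate the core dense operation that the multifrontal algorithm off-loads to the GPU, here is a minimal Python/NumPy sketch (not STRUMPACK's actual API): a frontal matrix is partially factored by eliminating its pivot block with a dense LU, and the Schur complement of the remaining block (the "contribution block" passed up the assembly tree) is formed with a GEMM-dominated update. In STRUMPACK these two steps map onto cuSOLVER/rocSOLVER getrf and cuBLAS/rocBLAS calls for large fronts, or custom CUDA/HIP kernels for small ones. The function name `partial_factor` is hypothetical.

```python
# Sketch of one multifrontal elimination step on a dense frontal matrix.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def partial_factor(front, nelim):
    """Eliminate the first `nelim` pivots of `front`.

    Returns the LU factors of the pivot block and the Schur complement
    (the contribution block assembled into the parent front).
    """
    F11 = front[:nelim, :nelim]   # pivot block, factored in place
    F12 = front[:nelim, nelim:]
    F21 = front[nelim:, :nelim]
    F22 = front[nelim:, nelim:]
    lu, piv = lu_factor(F11)                   # dense LU (getrf on GPU)
    S = F22 - F21 @ lu_solve((lu, piv), F12)   # GEMM-dominated Schur update
    return (lu, piv), S

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # diagonally dominant front
(lu, piv), S = partial_factor(F, 3)
print(S.shape)  # (3, 3)
```

The contribution block `S` is then scattered (via the "extend-add" operation) into the parent's frontal matrix; these scatter-gather operations between fronts are also performed on the GPU in the paper's implementation.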
