Proceedings Paper

Sparse matrix-vector multiplication on GPGPU clusters: A new storage format and a scalable implementation

Publisher

IEEE
DOI: 10.1109/IPDPSW.2012.211

Keywords

GPGPU; Sparse matrices; CUDA

Funding

  1. Office of Science of the U.S. Department of Energy [DE-AC02-05CH11231]
  2. Competence Network for Scientific High Performance Computing in Bavaria (KONWIHR) via project HQS@HPC-II

Abstract

Sparse matrix-vector multiplication (spMVM) is the dominant operation in many sparse solvers. We investigate performance properties of spMVM with matrices of various sparsity patterns on the NVIDIA Fermi class of GPGPUs. A new padded jagged diagonals storage (pJDS) format is proposed which may substantially reduce the memory overhead intrinsic to the widespread ELLPACK-R scheme while making no assumptions about the matrix structure. In our test scenarios the pJDS format cuts the overall spMVM memory footprint on the GPGPU by up to 70%, and achieves 91% to 130% of the ELLPACK-R performance. Using a suitable performance model we identify performance bottlenecks on the node level that rule out certain matrix structures for efficient multi-GPGPU parallelization. For appropriate sparsity patterns we extend previous work on distributed-memory parallel spMVM to demonstrate a scalable hybrid MPI-GPGPU code, achieving efficient overlap of communication and computation.
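
The abstract's central idea, per-chunk rather than global padding, can be made concrete with a short kernel. The following is a minimal sketch, not the paper's actual implementation: it assumes a pJDS-like layout in which rows are sorted by descending nonzero count, grouped into chunks of 32 rows, padded only to the longest row of each chunk, and stored column-major within the chunk. All identifiers (spmvm_pjds, chunk_start, row_len, col_idx, val) are illustrative, not the paper's API.

    #include <cuda_runtime.h>

    #define CHUNK 32  // chunk height; padding happens per chunk, not globally

    // One thread per row; rows are assumed pre-sorted by descending length.
    __global__ void spmvm_pjds(int nrows,
                               const int    *__restrict__ chunk_start, // data offset of each chunk
                               const int    *__restrict__ row_len,     // nonzeros per (sorted) row
                               const int    *__restrict__ col_idx,     // padded column indices
                               const double *__restrict__ val,         // padded matrix entries
                               const double *__restrict__ x,
                               double       *__restrict__ y)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= nrows) return;

        int chunk = row / CHUNK;        // chunk containing this row
        int lane  = row % CHUNK;        // row's position inside the chunk
        int base  = chunk_start[chunk]; // start of the chunk's column-major data

        double sum = 0.0;
        // Entry j of this row sits at base + j*CHUNK + lane, so the threads of
        // a warp load consecutive addresses of val/col_idx (coalesced access).
        for (int j = 0; j < row_len[row]; ++j) {
            int idx = base + j * CHUNK + lane;
            sum += val[idx] * x[col_idx[idx]];
        }
        y[row] = sum; // y is in sorted-row order; a permutation restores the original numbering
    }

    // launch, e.g.: spmvm_pjds<<<(nrows + 255) / 256, 256>>>(nrows, cs, rl, cidx, v, x, y);

Compared with ELLPACK-R, which pads every row to the global maximum row length, padding only within each chunk is what cuts the memory footprint, while the column-major chunk layout preserves the coalesced memory access that makes ELLPACK-R fast on GPGPUs. In the hybrid MPI-GPGPU setting mentioned in the abstract, a kernel of this form could be applied to the local partition of the matrix while the halo elements of x are exchanged via MPI, with an accumulating variant handling the remote partition afterwards.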
