Proceedings Paper

spECK: Accelerating GPU Sparse Matrix-Matrix Multiplication through Lightweight Analysis

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3332466.3374521

Keywords

SpGEMM; Sparse Matrix; GPU; Analysis

Funding

  1. German Research Foundation (DFG) [STE 2565/1-1]
  2. Austrian Science Fund (FWF) [I 3007]

Abstract

Sparse general matrix-matrix multiplication (SpGEMM) on GPUs is challenging due to the varying sparsity patterns of sparse matrices. Existing solutions achieve good performance for certain types of matrices but fail to accelerate all kinds of matrices in the same manner. Our approach combines multiple strategies with dynamic parameter selection to choose and tune the best-fitting algorithm for each row of the matrix. This choice is supported by a lightweight, multi-level matrix analysis that carefully balances analysis cost against expected performance gains. Our evaluation on thousands of matrices with various characteristics shows that we outperform all currently available solutions for 79% of the matrices with more than 15k intermediate products, and achieve the second-best performance for another 15%. For these matrices, our solution is on average 83% faster than the second-best approach and up to 25x faster than other state-of-the-art GPU implementations. Using our approach, applications can expect consistently high performance independent of the matrices they work on.
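To make the per-row strategy selection concrete, the CUDA sketch below shows one common way such a lightweight analysis can be realized. It is not the authors' spECK code: the CSR layout, the work-estimation kernel, the strategy names, and the bucket thresholds are all illustrative assumptions. The sketch estimates the number of intermediate products per output row of A*B and maps that estimate to a per-row accumulation strategy.

    // Illustrative sketch only (not the spECK implementation):
    // a cheap analysis pass over CSR matrices A and B that counts the
    // intermediate products of each output row, which a scheduler can
    // then use to pick a per-row kernel/strategy.
    #include <cstdint>

    __global__ void estimateRowWork(const int* __restrict__ rowPtrA,
                                    const int* __restrict__ colIdxA,
                                    const int* __restrict__ rowPtrB,
                                    int numRowsA,
                                    uint32_t* __restrict__ rowProducts)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= numRowsA) return;

        // Intermediate products of output row `row`:
        // sum of nnz(B[k,:]) over all nonzeros A[row,k].
        uint32_t products = 0;
        for (int j = rowPtrA[row]; j < rowPtrA[row + 1]; ++j) {
            int k = colIdxA[j];
            products += static_cast<uint32_t>(rowPtrB[k + 1] - rowPtrB[k]);
        }
        rowProducts[row] = products;
    }

    // Host-side bucketing by work estimate; strategy names and
    // thresholds are placeholders, not values from the paper.
    enum class RowStrategy { SortMerge, HashAccumulator, DenseAccumulator };

    RowStrategy chooseStrategy(uint32_t products)
    {
        if (products < 256)   return RowStrategy::SortMerge;
        if (products < 16384) return RowStrategy::HashAccumulator;
        return RowStrategy::DenseAccumulator;
    }

The point of the sketch is only that a single pass over A's rows and B's row pointers is enough to route each row to a different kernel; the multi-level analysis described in the abstract is more elaborate and additionally weighs analysis cost against the expected gain.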
