Article

Advancing on an efficient sparse matrix multiplication kernel for modern GPUs

Journal

Concurrency and Computation: Practice and Experience

Publisher

WILEY
DOI: 10.1002/cpe.7271

Keywords

bmSparse; GPUs; sparse matrix multiplication; Tensor Cores

Funding

  1. Universidad de la República
  2. PEDECIBA

Sparse matrix multiplication has become increasingly important in data science and machine learning applications, leading to research on accelerating this kernel on GPUs. By introducing new sparse matrix storage formats that mitigate irregularity, the proposed optimizations significantly outperform existing implementations in experiments and compete with mature algorithms.
Sparse matrix multiplication (SpGeMM) has grown in importance in recent years due to its data science and machine learning applications. Consequently, considerable research has focused on accelerating this kernel on GPUs. Designing massively parallel algorithms for SpGeMM is a challenging task, since the computation pattern is highly irregular and the required memory and operations depend on the interaction between the nonzero layouts of the inputs. One strategy to attack this kernel is to propose new sparse matrix storage formats that help mitigate this irregularity. In previous work, we began a study of the recently proposed bmSparse matrix format, suggesting several modifications to the SpGeMM algorithm. This work integrates those extensions and proposes new improvements to unleash bmSparse's full potential before comparing it with more established alternatives. In particular, we enhance one of the most computationally demanding stages with an adaptive technique, apply optimizations to achieve more efficient data accesses, and analyze the effect of using Tensor Cores to accelerate the multiplication stage of the algorithm. Experimental results on a set of real-world sparse matrices show that the optimized implementation largely outperforms vendor implementations such as NVIDIA cuSPARSE and Intel MKL's CSR variant, while being competitive with MKL's BSR variant.
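
The abstract refers to the bmSparse storage format and to bitmap-driven stages of the SpGeMM algorithm. As a rough illustration only, below is a minimal C++ sketch of a bitmap-based block-sparse tile in the spirit of bmSparse; it assumes 8x8 tiles whose occupancy is tracked by a 64-bit bitmap, and the Tile structure, helper names, and row/column-mask logic are illustrative assumptions rather than the authors' implementation. The sketch shows how a bitmap makes two of the irregular SpGeMM subproblems inexpensive: counting a tile's nonzeros (a population count) and predicting whether a pair of input tiles contributes to a given output position (a bitwise intersection).

// Hedged sketch: a bitmap-based 8x8 tile in the spirit of bmSparse.
// The tile layout, helper names, and row/column masks below are
// illustrative assumptions, not the paper's exact implementation.
#include <cstdint>
#include <cstdio>
#include <vector>

#if defined(__GNUC__) || defined(__clang__)
static int popcount64(uint64_t x) { return __builtin_popcountll(x); }
#else
static int popcount64(uint64_t x) {
    int c = 0;
    while (x) { x &= x - 1; ++c; }
    return c;
}
#endif

// One 8x8 tile: a 64-bit occupancy bitmap (bit r*8+c set when A(r,c) != 0)
// plus the nonzero values stored contiguously in row-major order.
struct Tile {
    uint64_t bitmap = 0;
    std::vector<float> vals;   // popcount64(bitmap) entries
};

// Insert A(r,c) = v into the tile (assumes no duplicates and
// insertion in row-major order, for simplicity).
void tile_insert(Tile& t, int r, int c, float v) {
    t.bitmap |= 1ULL << (r * 8 + c);
    t.vals.push_back(v);
}

// Predict whether multiplying tile A by tile B can produce output at (r,c):
// row r of A must share at least one position with column c of B.
bool tile_pair_hits(const Tile& a, const Tile& b, int r, int c) {
    uint64_t rowMaskA = (a.bitmap >> (r * 8)) & 0xFFULL;      // row r of A
    uint64_t colBitsB = 0;
    for (int k = 0; k < 8; ++k)                               // column c of B
        colBitsB |= ((b.bitmap >> (k * 8 + c)) & 1ULL) << k;
    return (rowMaskA & colBitsB) != 0;
}

int main() {
    Tile a, b;
    tile_insert(a, 0, 3, 1.5f);   // A(0,3)
    tile_insert(b, 3, 5, 2.0f);   // B(3,5)
    std::printf("nnz(A tile) = %d\n", popcount64(a.bitmap));
    std::printf("C(0,5) produced? %s\n", tile_pair_hits(a, b, 0, 5) ? "yes" : "no");
    return 0;
}

Compiled with a standard C++ compiler, the example prints the tile's nonzero count and reports that the pair of tiles produces an output at position (0,5), since A(0,3) and B(3,5) share the inner index 3.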
