Article

OpSparse: A Highly Optimized Framework for Sparse General Matrix Multiplication on GPUs

Journal

IEEE ACCESS
Volume 10, Pages 85960-85974

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2022.3196940

Keywords

Sparse general matrix multiplication; SpGEMM; GPU; high-performance computing


Sparse general matrix multiplication (SpGEMM) is an important and expensive computation primitive in many real-world applications. Due to SpGEMM's inherent irregularity and the vast diversity of its input matrices, developing a high-performance SpGEMM implementation on modern processors such as GPUs is challenging. The state-of-the-art SpGEMM libraries (i.e., nsparse and spECK) adopt several algorithms to tackle the challenges of global load balance, local load balance, and allocation of the result matrix. While these libraries focus on the high-level algorithm design for SpGEMM, they neglect several low-level architecture-specific optimizations, which leads to inefficient implementations. In this paper, we classify these inefficiencies into several categories. Based on our observations, we propose a highly optimized SpGEMM library called OpSparse. The optimizations in OpSparse include 1) optimizing the binning method by improving the utilization of the shared memory, 2) optimizing the hashing method by reducing the accesses to the hash tables, 3) improving the trade-off between hash collision rate and hardware utilization in the hashing method by setting appropriate binning ranges, 4) reducing the overheads of global memory utilization by minimizing the global memory usage of the metadata, and 5) improving the execution parallelism by overlapping global memory allocation with kernel execution. Performance evaluations with 26 commonly used matrices on an Nvidia Tesla V100 GPU show that OpSparse achieves on average 7.35x (up to 27.8x), 1.43x (up to 1.81x), and 1.52x (up to 2.04x) speedups over three state-of-the-art SpGEMM libraries: cuSPARSE, nsparse, and spECK, respectively.
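To make the hashing method mentioned in the abstract concrete, the following is a minimal Python sketch of Gustavson-style row-wise SpGEMM with a hash-table accumulator, the general scheme that hash-based GPU libraries such as nsparse, spECK, and OpSparse build on. This is an illustration only, not OpSparse's CUDA implementation: all function and variable names here are hypothetical, and a Python dict stands in for the shared-memory hash tables that the paper optimizes.

```python
# Illustrative sketch (NOT OpSparse's actual code): hash-based, row-wise
# SpGEMM. Each output row of C = A * B is accumulated in a hash table
# keyed by column index; GPU libraries place these tables in shared
# memory and bin rows by their estimated sizes to pick a table size.

def spgemm_row_hash(A_rows, B_rows):
    """A_rows/B_rows: one list per matrix row, each row a list of
    (col, val) pairs (a CSR-like sparse format). Returns C = A @ B
    in the same format, with columns sorted within each row."""
    C_rows = []
    for a_row in A_rows:
        table = {}  # hash-table accumulator for one output row
        for k, a_val in a_row:            # nonzero A[i, k]
            for j, b_val in B_rows[k]:    # nonzero B[k, j]
                # Hash insert: collide on existing key j -> accumulate
                table[j] = table.get(j, 0.0) + a_val * b_val
        C_rows.append(sorted(table.items()))
    return C_rows

# Tiny example: A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]] as dense views
A = [[(0, 1.0)], [(1, 2.0)]]
B = [[(1, 3.0)], [(0, 4.0)]]
print(spgemm_row_hash(A, B))  # [[(1, 3.0)], [(0, 8.0)]]
```

The binning trade-off the abstract refers to appears here as the choice of hash-table capacity per row: a small table raises the collision rate, while an oversized one wastes shared memory and lowers hardware utilization, which is why rows are grouped into bins by estimated size before kernel launch.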

