Article

GraNDe: Near-Data Processing Architecture With Adaptive Matrix Mapping for Graph Convolutional Networks

Journal

IEEE Computer Architecture Letters
Volume 21, Issue 2, Pages 45-48

Publisher

IEEE Computer Society
DOI: 10.1109/LCA.2022.3182387

Keywords

Random access memory; Bandwidth; Sparse matrices; Performance evaluation; System-on-chip; Registers; Memory management; Near-data processing; DRAM; graph convolutional networks

Funding

  1. National Research Foundation of Korea (NRF)
  2. Korea government (MSIT) [NRF-2018R1A5A1059921]
  3. Institute of Information & Communications Technology Planning & Evaluation (IITP)
  4. Korea government (MSIT) under Artificial Intelligence Graduate School Program (Seoul National University) [2021-0-01343]
  5. Inha University Research Grant


Summary

Graph Convolutional Network (GCN) models achieve high accuracy in interpreting graph data, and one of their key components is the aggregation operation. The proposed architecture, GraNDe, accelerates this memory-intensive aggregation and achieves a speedup of up to 4.3x over a baseline system on Open Graph Benchmark datasets.

Abstract

Graph Convolutional Network (GCN) models have attracted attention for their high accuracy in interpreting graph data. One of the primary building blocks of a GCN model is aggregation, which gathers and averages the feature vectors of the vertices adjacent to each individual vertex. Aggregation works by multiplying the adjacency and feature matrices. The size of both matrices exceeds the on-chip cache capacity, and the adjacency matrix is highly sparse. These characteristics lead to little data reuse and cause numerous main-memory accesses during aggregation, making it a memory-intensive operation. We propose GraNDe, a near-data processing (NDP) architecture that accelerates memory-intensive aggregation by locating processing elements near the DRAM datapath to exploit rank-level parallelism. By exploring how the operand matrices are mapped to DRAM ranks, we find that the optimal mapping differs depending on the configuration of a specific GCN layer. With our optimal layer-by-layer mapping scheme, GraNDe achieves a speedup of up to 4.3x over the baseline system on Open Graph Benchmark datasets.
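
For context, the aggregation described in the abstract reduces to a sparse-dense matrix multiplication between the adjacency matrix and the feature matrix. The minimal sketch below illustrates this on a toy graph with SciPy; the toy graph, variable names, and mean-style row normalization are illustrative assumptions and do not represent the GraNDe hardware datapath or the evaluated datasets.

# Minimal sketch of GCN aggregation as a sparse-dense matrix multiplication.
# The toy graph and mean-style normalization are illustrative assumptions;
# this is not the GraNDe hardware datapath.
import numpy as np
import scipy.sparse as sp

num_vertices, feature_dim = 8, 4

# Toy undirected ring graph; the graphs evaluated in the paper are far larger,
# which is why the operand matrices exceed on-chip cache capacity.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 0]])
rows = np.concatenate([edges[:, 0], edges[:, 1]])
cols = np.concatenate([edges[:, 1], edges[:, 0]])
vals = np.ones(len(rows))

# Highly sparse adjacency matrix A (CSR) and dense feature matrix X.
A = sp.csr_matrix((vals, (rows, cols)), shape=(num_vertices, num_vertices))
X = np.random.rand(num_vertices, feature_dim).astype(np.float32)

# Aggregation: each vertex gathers and averages its neighbors' feature vectors,
# i.e., a row-normalized adjacency matrix multiplied by the feature matrix.
deg = np.asarray(A.sum(axis=1)).ravel()
D_inv = sp.diags(1.0 / np.maximum(deg, 1.0))  # guard against isolated vertices
aggregated = (D_inv @ A) @ X                  # sparse-dense multiply

print(aggregated.shape)  # (num_vertices, feature_dim)

Because the adjacency matrix is highly sparse, its nonzeros touch scattered rows of the feature matrix with little reuse, which is the memory-intensive access pattern that motivates placing processing elements near the DRAM datapath.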

