Journal
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/HiPC56025.2022.00018
Keywords
Graph Neural Network; Graph Partitioning; Distributed Processing; Deep Learning
Category
Funding
- DST/SERB through Start-up Research Grant [SRG/2021/001134]
Researchers propose a joint partitioning and sampling algorithm to address the challenges of distributed graph neural networks, effectively reducing communication costs and improving accuracy.
Graph Neural Network (GNN) has emerged as a popular toolbox for solving complex problems on graph data structures. Graph neural networks use machine learning techniques to learn vector representations of nodes and/or edges. Learning these representations demands a huge amount of memory and computing power. Traditional shared-memory multiprocessors are insufficient to meet the computing requirements of real-world data; hence, research has gained momentum toward distributed GNNs. Scaling distributed GNNs poses the following challenges: (1) the input graph needs to be efficiently partitioned, (2) the cost of communication between compute nodes should be reduced, and (3) the sampling strategy should be chosen carefully to minimize the loss in accuracy. To address these challenges, we propose a joint partitioning and sampling algorithm, which partitions the input graph with weighted METIS and uses a biased sampling strategy to minimize total communication costs. We implemented our approach using the DistDGL framework and evaluated it on several real-world datasets. We observe that, compared to the state-of-the-art DistDGL implementation, our approach (1) reduces communication overhead by 53% on average, (2) requires less time to partition a graph, (3) shows improved accuracy, and (4) achieves a speedup of 1.5x on the OGB-Arxiv dataset.
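The biased sampling idea described in the abstract, preferring neighbors that live on the same partition so that fewer sampled edges cross partition boundaries, can be sketched as follows. This is a minimal illustration under assumed details, not the paper's actual implementation: the function name `biased_sample_neighbors`, the `local_bias` weight, and the toy graph are hypothetical.

```python
import random

def biased_sample_neighbors(adj, part, node, k, local_bias=4.0, seed=0):
    """Sample up to k neighbors of `node`, weighting neighbors in the
    same partition `local_bias` times higher than remote neighbors.

    adj:  dict mapping node id -> list of neighbor ids
    part: dict mapping node id -> partition id (e.g., from METIS)
    """
    rng = random.Random(seed)
    nbrs = adj[node]
    if len(nbrs) <= k:
        return list(nbrs)
    # Intra-partition neighbors get a higher weight, so sampled edges
    # are less likely to cross partitions (less communication).
    pool = [(n, local_bias if part[n] == part[node] else 1.0) for n in nbrs]
    chosen = []
    for _ in range(k):  # weighted sampling without replacement
        total = sum(w for _, w in pool)
        r = rng.random() * total
        acc = 0.0
        for i, (n, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(n)
                pool.pop(i)
                break
    return chosen
```

In a distributed setting, the same weighting would be applied inside each partition's local sampler; the bias trades a small amount of sampling uniformity for fewer remote-neighbor fetches between compute nodes.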