Article

Evaluating Modern GPU Interconnect: PCIe, NVLink, NV-SLI, NVSwitch and GPUDirect

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE Computer Society
DOI: 10.1109/TPDS.2019.2928289

Keywords

Performance evaluation; GPU; interconnect; NUMA; PCIe; NVLink; NVSwitch; SLI; GPUDirect; RDMA; NCCL

Funding

  1. Application Assessment program within the Exascale Computing Project of the U.S. Department of Energy Office of Science [17-SC-20-SC]
  2. U.S. Department of Energy Office of Science
  3. National Nuclear Security Administration
  4. U.S. DOE Office of Science, Office of Advanced Scientific Computing Research [66150]
  5. Office of Science of the U.S. Department of Energy [DE-AC05-00OR22725]
  6. U.S. Department of Energy [DE-AC05-76RL01830]
  7. High Performance Data Analytics (HPDA) program at PNNL

Abstract

High-performance multi-GPU computing has become an inevitable trend due to the ever-increasing demand for computation capability in emerging domains such as deep learning, big data, and planet-scale simulations. However, the lack of deep understanding of how modern GPUs can be connected, and of the real impact of state-of-the-art interconnect technology on multi-GPU application performance, has become a hurdle. In this paper, we fill the gap by conducting a thorough evaluation of five of the latest modern GPU interconnects: PCIe, NVLink-V1, NVLink-V2, NVLink-SLI, and NVSwitch, on six high-end servers and HPC platforms: the NVIDIA P100-DGX-1, V100-DGX-1, and DGX-2, OLCF's SummitDev and Summit supercomputers, as well as an SLI-linked system with two NVIDIA Turing RTX-2080 GPUs. Based on the empirical evaluation, we have observed four new types of GPU communication network NUMA effects: three are triggered by NVLink's topology, connectivity, and routing, while one is caused by a PCIe chipset design issue. These observations indicate that, for an application running in a multi-GPU node, choosing the right GPU combination can have a considerable impact on GPU communication efficiency, as well as on the application's overall performance. Our evaluation can be leveraged to build practical multi-GPU performance models, which are vital for GPU task allocation, scheduling, and migration in a shared environment (e.g., AI clouds and HPC centers), as well as for communication-oriented performance tuning.
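To illustrate how such interconnect NUMA effects can be probed on a multi-GPU node, the sketch below measures peer-to-peer copy bandwidth for every GPU pair using the standard CUDA runtime P2P API (cudaDeviceCanAccessPeer, cudaDeviceEnablePeerAccess, cudaMemcpyPeerAsync). It is a minimal, illustrative example, not the benchmark suite used in the paper; the payload size and repetition count are arbitrary choices, and no warm-up pass is included. On a DGX-class machine, pairs connected by NVLink, pairs sharing a PCIe switch, and pairs whose traffic crosses the CPU interconnect typically report noticeably different bandwidths, which is the kind of topology-dependent behavior described above.

```cuda
// Minimal P2P bandwidth probe over all GPU pairs (illustrative sketch only).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB payload per transfer (arbitrary)
    const int    reps  = 20;             // repetitions per pair (arbitrary)

    int n = 0;
    cudaGetDeviceCount(&n);

    for (int src = 0; src < n; ++src) {
        for (int dst = 0; dst < n; ++dst) {
            if (src == dst) continue;

            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);

            // Enable direct (NVLink/PCIe P2P) access when the topology allows it;
            // otherwise cudaMemcpyPeer falls back to staging through host memory.
            if (canAccess) {
                cudaSetDevice(src);
                cudaDeviceEnablePeerAccess(dst, 0);
            }

            void *dSrc, *dDst;
            cudaSetDevice(src);  cudaMalloc(&dSrc, bytes);
            cudaSetDevice(dst);  cudaMalloc(&dDst, bytes);

            // Time the copies with events on the source device's default stream.
            cudaSetDevice(src);
            cudaEvent_t t0, t1;
            cudaEventCreate(&t0);  cudaEventCreate(&t1);

            cudaEventRecord(t0);
            for (int r = 0; r < reps; ++r)
                cudaMemcpyPeerAsync(dDst, dst, dSrc, src, bytes);
            cudaEventRecord(t1);
            cudaEventSynchronize(t1);

            float ms = 0.f;
            cudaEventElapsedTime(&ms, t0, t1);
            double gbps = (double)bytes * reps / (ms * 1e-3) / 1e9;
            printf("GPU %d -> GPU %d : %6.1f GB/s (P2P %s)\n",
                   src, dst, gbps, canAccess ? "enabled" : "unavailable");

            cudaEventDestroy(t0);  cudaEventDestroy(t1);
            cudaSetDevice(src);  cudaFree(dSrc);
            cudaSetDevice(dst);  cudaFree(dDst);
        }
    }
    return 0;
}
```

Compile with nvcc and run on a node with at least two GPUs; comparing the resulting bandwidth matrix against the link report from nvidia-smi topo -m shows which interconnect (NVLink, PCIe switch, or CPU-crossing path) each GPU pair actually traverses.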
