Proceedings Paper

On the Feasibility of Hybrid Electrical/Optical Switch Architecture for Large-Scale Training of Distributed Deep Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/PHOTONICS49561.2019.00007

Keywords

Distributed Deep Learning; High Performance Computing (HPC); Optical Circuit Switching; Hybrid Switching

Abstract

Data parallelism is the dominant method used to train deep learning (DL) models on High-Performance Computing systems such as large-scale GPU clusters. When training a DL model on a large number of nodes, inter-node communication becomes a bottleneck due to its higher latency and lower link bandwidth compared with intra-node communication. To cope with this problem, several techniques have been proposed to (a) optimize the collective communication algorithms by taking the network topology into account, (b) reduce the message size, and (c) overlap communication with computation. All of these approaches aim to handle the large message sizes while mitigating the limitations of the inter-node network. In this study, we investigate the benefit of increasing inter-node link bandwidth by using hybrid switching systems that combine Electrical Packet Switching and Optical Circuit Switching. We found that the typical data transfers of synchronous data-parallel training are long-lived and rarely change, so they can be sped up with optical switching. Simulation results with the SimGrid simulator show that our approach speeds up the training time of deep learning applications by around 10%.
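To illustrate why this traffic pattern suits circuit switching, the sketch below shows synchronous data-parallel training with a gradient allreduce on every iteration: the same large buffer is exchanged between the same set of nodes at every step, so the inter-node flows are long-lived and rarely change. This is a minimal illustration, not the authors' code; it assumes mpi4py and NumPy, stubs out the model computation, and the 25M-parameter size (roughly a ResNet-50-scale gradient buffer) is an illustrative choice.

```python
# Minimal sketch (not the paper's implementation): synchronous data-parallel
# training where every step performs the same large gradient allreduce.
# The stable, repetitive inter-node traffic is what an optical circuit,
# once established, can serve at higher bandwidth.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

num_params = 25_000_000                     # illustrative model size (~100 MB of fp32 gradients)
grads = np.empty(num_params, dtype=np.float32)
avg = np.empty_like(grads)

for step in range(100):
    # (1) Local forward/backward pass on this node's mini-batch shard (stubbed).
    grads[:] = np.float32(rank + 1)

    # (2) Inter-node synchronization: every step moves the same buffer over
    #     the same links, i.e. a long-lived, rarely changing flow.
    comm.Allreduce(grads, avg, op=MPI.SUM)
    avg /= world

    # (3) Identical weight update on every node keeps the replicas in sync (stubbed).
```

Run with, e.g., `mpirun -np 4 python train_sketch.py`; each rank then represents one node of the cluster, and the `Allreduce` calls stand in for the inter-node transfers whose bandwidth the hybrid electrical/optical fabric is meant to increase.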
