Article

Demystifying Parallel and Distributed Deep Learning: An In-depth Concurrency Analysis

Journal

ACM COMPUTING SURVEYS
Volume 52, Issue 4

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3320060

Keywords

Deep learning; distributed computing; parallel algorithms

Funding

  1. ETH Zurich Postdoctoral Fellowship
  2. Marie Curie Actions for People COFUND program
  3. European Research Council (ERC) under the European Union's Horizon 2020 programme [678880]

Abstract

Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge, and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
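
One of the concurrency patterns the abstract refers to, synchronous data-parallel training, can be illustrated with a short sketch. The following is not code from the paper; it simulates the pattern in plain NumPy with an assumed toy model, worker count, and learning rate: each simulated "worker" computes a gradient on its shard of a mini-batch, the gradients are averaged as a real allreduce would, and every replica applies the identical update.

# Minimal sketch (not from the paper) of synchronous data-parallel SGD.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: find w such that X @ w ~= y.
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(512, 3))
y = X @ true_w + 0.01 * rng.normal(size=512)

def gradient(w, Xb, yb):
    """Gradient of the mean squared error on one mini-batch shard."""
    residual = Xb @ w - yb
    return 2.0 * Xb.T @ residual / len(yb)

num_workers = 4          # assumed number of data-parallel replicas
lr = 0.1                 # assumed learning rate
w = np.zeros(3)          # all replicas start from the same parameters

for step in range(200):
    batch = rng.choice(len(X), size=64, replace=False)
    shards = np.array_split(batch, num_workers)   # one shard per worker
    # Each worker computes a local gradient; averaging emulates an allreduce.
    local_grads = [gradient(w, X[s], y[s]) for s in shards]
    avg_grad = np.mean(local_grads, axis=0)
    w -= lr * avg_grad    # every replica applies the same update

print("recovered weights:", np.round(w, 3))

In an actual distributed deployment the averaging step is a network allreduce, and its cost relative to local computation is one of the communication trade-offs the survey models.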

Authors

Tal Ben-Nun; Torsten Hoefler
