3.8 Proceedings Paper

Parallel and Distributed Training of Deep Neural Networks: A brief overview

Publisher

IEEE
DOI: 10.1109/INES49302.2020.9147123

Funding

  1. European Novel EOSC services for Emerging Atmosphere, Underwater and Space Challenges (NEANIAS) project [863448]
  2. National Research, Development and Innovation Office (NKFIH) under OTKA [K 132838]
  3. Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University

Abstract

Deep neural networks and deep learning are becoming important and popular techniques in modern services and applications. Training these networks is computationally intensive because of the very large number of trainable parameters and training samples. In this brief overview, current solutions that speed up the training process via parallel and distributed computation are introduced. The necessary components and strategies are described, from low-level communication protocols to high-level frameworks for distributed deep learning. Current implementations of deep learning frameworks with distributed computation capabilities are compared, and key parameters are identified to help design effective solutions.
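
A concrete instance of the data-parallel strategy such overviews cover is synchronous gradient all-reduce. The sketch below is a minimal, illustrative PyTorch script, not code from the paper: each worker holds a full model replica, computes gradients on its own mini-batch, and DistributedDataParallel averages the gradients across workers during the backward pass. The two-process CPU setup, the linear model, and the random data are placeholder assumptions.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each worker joins a process group. "gloo" runs on CPU;
    # "nccl" is the usual backend on GPU clusters.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(10, 1)  # placeholder model
    ddp_model = DDP(model)          # gradients are all-reduced across workers
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(32, 10)     # placeholder local mini-batch
        y = torch.randn(32, 1)
        loss = torch.nn.functional.mse_loss(ddp_model(x), y)
        opt.zero_grad()
        loss.backward()             # backward pass triggers the all-reduce
        opt.step()                  # every rank applies the same update

    dist.destroy_process_group()

if __name__ == "__main__":
    # Two local processes stand in for a multi-node cluster.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    mp.spawn(train, args=(2,), nprocs=2)

Because every worker applies the same averaged gradient, this scheme is equivalent to single-node SGD with a proportionally larger batch; the choice of communication backend is one of the low-level protocol decisions the overview discusses.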
