Proceedings Paper

Parallel and Distributed Training of Deep Neural Networks: A brief overview

Publisher

IEEE
DOI: 10.1109/ines49302.2020.9147123

Keywords

-

Funding

  1. European Novel EOSC services for Emerging Atmosphere, Underwater and Space Challenges (NEANIAS) project [863448]
  2. National Research, Development and Innovation Office (NKFIH) under OTKA [K 132838]
  3. Doctoral School of Applied Informatics and Applied Mathematics, Obuda University

Abstract

Deep neural networks and deep learning are becoming important and popular techniques in modern services and applications. Training these networks is computationally intensive because of the very large number of trainable parameters and the large volume of training samples. In this brief overview, current solutions that aim to speed up the training process via parallel and distributed computation are introduced. The necessary components and strategies are described, from low-level communication protocols to high-level frameworks for distributed deep learning. Current implementations of deep learning frameworks with distributed computational capabilities are compared, and key parameters are identified to help design effective solutions.
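
To illustrate the kind of strategy such an overview covers, the sketch below shows synchronous data-parallel training, in which each worker holds a full model replica and gradients are averaged with an all-reduce collective after every step. It assumes PyTorch's DistributedDataParallel over the NCCL backend, which is only one of the framework/protocol combinations surveys like this compare; the model and the random batches are placeholders, not anything from the paper.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; torchrun sets the environment variables that
    # init_process_group and LOCAL_RANK rely on.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model stands in for a real deep network.
    model = torch.nn.Linear(1000, 10).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):
        # In a real job a DistributedSampler shards the dataset so each
        # rank sees a different slice; random tensors stand in for a batch.
        inputs = torch.randn(32, 1000).cuda(local_rank)
        labels = torch.randint(0, 10, (32,)).cuda(local_rank)

        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), labels)
        loss.backward()   # gradients are all-reduced across ranks here
        optimizer.step()  # every rank applies the same averaged update

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with e.g.: torchrun --nproc_per_node=4 train.py
```

Launched this way, communication cost is dominated by the all-reduce of the gradients, which is exactly the low-level protocol layer (MPI, NCCL, gRPC, and similar) that the overview distinguishes from the high-level framework layer.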
