Article

Distributed Training for Multi-Layer Neural Networks by Consensus

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2019.2921926

Keywords

Training; Neural networks; Consensus algorithm; Distributed databases; Topology; Convergence; Network topology; Backpropagation; consensus; distributed training; graph theory; Lyapunov

Funding

Science and Technology Facilities Council through Newton Fund [ST/N006852/1]

Abstract

Over the past decade, there has been growing interest in large-scale and privacy-sensitive machine learning, especially in settings where data cannot be shared for privacy reasons or cannot be centralized due to computational limitations. Parallel computation has been proposed to circumvent these limitations, usually based on master-slave or decentralized topologies; comparative studies show that a decentralized graph avoids the potential communication bottleneck at the central agent but incurs extra communication cost. In this brief, a consensus algorithm is designed to allow all agents over the decentralized graph to converge to each other, so that distributed neural networks with enough consensus steps achieve nearly the same performance as a centrally trained model. A convergence analysis proves that all agents over an undirected graph converge to the same optimal model even with only a single consensus step per iteration, which significantly reduces the communication cost. Simulation studies demonstrate that the proposed distributed training algorithm for multi-layer neural networks, which requires no data exchange, achieves performance comparable to or even better than centralized training.
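To make the idea concrete, the following is a minimal sketch of decentralized training with a single consensus step per iteration. It is not the paper's algorithm: it assumes a simple ring topology, a linear model on synthetic data, and Metropolis mixing weights, whereas the paper treats multi-layer networks with a Lyapunov-based convergence analysis. Each agent takes a local gradient step on its private data and then averages its parameters with its graph neighbours, so no raw data is exchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split across agents (each agent keeps its data private).
n_agents, n_features, n_local = 4, 5, 50
true_w = rng.normal(size=n_features)
X = [rng.normal(size=(n_local, n_features)) for _ in range(n_agents)]
y = [Xi @ true_w + 0.1 * rng.normal(size=n_local) for Xi in X]

# Undirected ring graph; Metropolis weights give a symmetric, doubly stochastic
# mixing matrix W, the standard ingredient of average consensus.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0          # 1 / (1 + max(deg_i, deg_j)), degree = 2 on a ring
    W[i, i] = 1.0 - W[i].sum()

w = [np.zeros(n_features) for _ in range(n_agents)]  # local model copies
lr, epochs = 0.05, 200

for _ in range(epochs):
    # Local gradient step on each agent's own data (mean-squared-error loss).
    grads = [Xi.T @ (Xi @ wi - yi) / n_local for Xi, yi, wi in zip(X, y, w)]
    w = [wi - lr * g for wi, g in zip(w, grads)]
    # Single consensus step: each agent mixes its parameters with its neighbours.
    w = [sum(W[i, j] * w[j] for j in range(n_agents)) for i in range(n_agents)]

# All local models should now agree with each other and approximate the true weights.
print("max disagreement:", max(np.linalg.norm(w[i] - w[0]) for i in range(n_agents)))
print("error vs. true weights:", np.linalg.norm(w[0] - true_w))
```

In this sketch the disagreement between agents shrinks alongside the training error, illustrating the abstract's claim that a single consensus step per iteration can suffice for all agents to reach a common model; extending it to multi-layer networks would apply the same neighbour averaging to every layer's weights after each backpropagation step.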
