Article

Secure Distributed On-Device Learning Networks with Byzantine Adversaries

Journal

IEEE NETWORK
Volume 33, Issue 6, Pages 180-187

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MNET.2019.1900025

Keywords

Servers; Reliability; Computational modeling; Bandwidth; Privacy; Knowledge engineering; Convergence

Funding

  1. UBC Four-Year Doctoral Fellowship
  2. Natural Sciences and Engineering Research Council of Canada
  3. National Engineering Laboratory for Big Data System Computing Technology at Shenzhen University, China

Abstract

Privacy concerns arise when a central server holds copies of user datasets. Hence, learning networks are undergoing a paradigm shift from centralized in-cloud learning to distributed on-device learning. Benefiting from parallel computing, on-device learning networks have a lower bandwidth requirement than in-cloud learning networks. On-device learning networks also offer several desirable characteristics such as privacy preservation and flexibility. However, they are vulnerable to malfunctioning terminals across the networks. The worst-case malfunctioning terminals are Byzantine adversaries, which can perform arbitrary harmful operations to compromise the learned model based on full knowledge of the networks. Hence, the design of secure learning algorithms has become an emerging topic for on-device learning networks with Byzantine adversaries. In this article, we present a comprehensive overview of the prevalent secure learning algorithms for the two promising types of on-device learning networks: federated-learning networks and decentralized-learning networks. We also review several future research directions for federated-learning and decentralized-learning networks.
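To make the threat model concrete, the sketch below illustrates one widely studied Byzantine-robust aggregation rule, the coordinate-wise median (the specific algorithms surveyed in the article may differ). A naive average of client updates can be driven arbitrarily far off by a single adversarial client, whereas the median of each coordinate stays within the range of the honest clients' values as long as honest clients form a majority.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client model updates by taking the median of each coordinate.

    Robustness intuition: if fewer than half of the clients are Byzantine,
    every aggregated coordinate lies between values reported by honest clients,
    so an adversary cannot drag the model arbitrarily far away.
    """
    return np.median(np.stack(updates), axis=0)

# Three honest clients report gradients near the true value [1.0, -2.0];
# one Byzantine client sends an arbitrarily corrupted update.
honest = [np.array([1.0, -2.0]), np.array([1.1, -1.9]), np.array([0.9, -2.1])]
byzantine = [np.array([1e6, -1e6])]

avg = np.mean(np.stack(honest + byzantine), axis=0)   # dominated by the attacker
med = coordinate_wise_median(honest + byzantine)      # stays near [1.0, -2.0]
```

In a federated-learning network this rule would replace the server's averaging step; in a decentralized-learning network each node can apply the same idea to the updates received from its neighbors.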
