3.8 Proceedings Paper

Federated Learning with Quantization Constraints

Publisher

IEEE
DOI: 10.1109/icassp40776.2020.9054168

Keywords

Federated learning; edge computing; quantization

Funding

  1. Benoziyo Endowment Fund for the Advancement of Science
  2. Estate of Olga Klein-Astrachan
  3. European Union's Horizon 2020 research and innovation program [646804-ERC-COG-BNYQ]
  4. Israel Science Foundation [0100101]
  5. U.S. National Science Foundation [CCF-0939370, CCF-1513915]

Abstract

Traditional deep learning models are trained on centralized servers using labeled sample data collected from edge devices. This data often includes private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach to train such learning models without requiring the users to share their possibly private labeled data. In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model. A major challenge that arises in this method is the need for each user to efficiently transmit its learned model over the throughput-limited uplink channel. In this work, we tackle this challenge using tools from quantization theory. In particular, we identify the unique characteristics associated with conveying trained models over rate-constrained channels, and characterize a suitable quantization scheme for such setups. We show that combining universal vector quantization methods with FL yields a decentralized training system, which is both efficient and feasible. We also derive theoretical performance guarantees of the system. Our numerical results illustrate the substantial performance gains of our scheme over FL with previously proposed quantization approaches.
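
The abstract describes the general mechanism: each user computes a local model update, quantizes it to meet the uplink rate constraint, and the server aggregates the received updates into a global model. The sketch below is a rough illustration of that workflow using federated averaging with subtractive dithered scalar quantization of the updates; it is not the paper's universal vector quantization scheme, and the function names, bit budget, and toy linear-regression task are illustrative assumptions.

import numpy as np

# Minimal sketch (not the paper's exact scheme): federated averaging where each
# user quantizes its model update with subtractive dithered scalar quantization
# before sending it over the rate-constrained uplink.
RNG = np.random.default_rng(0)

def dithered_quantize(update, bits=4):
    # Uniform quantizer with a random dither; returning the dither lets the
    # server subtract it, which makes the quantization error signal-independent.
    scale = np.max(np.abs(update)) + 1e-12          # dynamic range of this update
    step = 2 * scale / (2 ** bits - 1)              # quantization step size
    dither = RNG.uniform(-step / 2, step / 2, update.shape)
    quantized = step * np.round((update + dither) / step)
    return quantized, dither

def client_update(global_model, data, labels, lr=0.1, bits=4):
    # One local gradient step on a linear model, then quantize the update.
    pred = data @ global_model
    grad = data.T @ (pred - labels) / len(labels)   # squared-loss gradient
    update = -lr * grad                             # local model change
    return dithered_quantize(update, bits)

def server_aggregate(global_model, messages):
    # Subtract the shared dither from each message, average, and apply.
    recovered = [q - d for q, d in messages]
    return global_model + np.mean(recovered, axis=0)

# Toy run: 5 users, each with its own local linear-regression data.
dim, users, rounds = 8, 5, 50
true_w = RNG.normal(size=dim)
model = np.zeros(dim)
datasets = []
for _ in range(users):
    X = RNG.normal(size=(32, dim))
    y = X @ true_w + 0.01 * RNG.normal(size=32)
    datasets.append((X, y))

for _ in range(rounds):
    messages = [client_update(model, X, y) for X, y in datasets]
    model = server_aggregate(model, messages)

print("distance to true model:", np.linalg.norm(model - true_w))

In a practical deployment the dither would be generated from a pseudo-random seed shared between each user and the server, so only the quantized values need to cross the uplink; it is returned explicitly here only to keep the sketch self-contained.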
