Article

Communication-efficient federated learning

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2024789118

Keywords

machine learning; federated learning; wireless communications

Funding

  1. Key Area R&D Program of Guangdong Province [2018B030338001]
  2. National Key R&D Program of China [2018YFB1800800]
  3. Shenzhen Outstanding Talents Training Fund
  4. European Union [646804-ERC-COG-BNYQ]
  5. Israel Science Foundation [0100101]
  6. US NSF [CCF-1908208]
  7. Guangdong Research Project [2017ZT07X152]

Abstract

Federated learning allows edge devices to collaboratively train ML models without sharing private data, but communication delay is a major bottleneck. A communication-efficient framework is proposed that combines probabilistic device selection, parameter quantization, and wireless resource allocation to improve convergence speed and training accuracy.
Federated learning (FL) enables edge devices, such as Internet of Things devices (e.g., sensors), servers, and institutions (e.g., hospitals), to collaboratively train a machine learning (ML) model without sharing their private data. FL requires devices to exchange their ML parameters iteratively, and thus the time required to jointly learn a reliable model depends not only on the number of training steps but also on the ML parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a multitude of participating devices over resource-limited communication networks, for example, wireless networks with limited bandwidth and power. Therefore, the repeated FL parameter transmission from edge devices induces a notable delay, which can be larger than the ML model training time by orders of magnitude. Hence, communication delay constitutes a major bottleneck in FL. Here, a communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission. To further reduce the FL convergence time, a quantization method is proposed to reduce the volume of the model parameters exchanged among devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve the identification accuracy and convergence time by up to 3.6% and 87%, respectively, compared to standard FL.
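To make the two main ingredients of the abstract concrete, below is a minimal Python/NumPy sketch of one server-side FL round with probabilistic device selection and parameter quantization. Everything here is an illustrative assumption rather than the paper's actual scheme: the function names (`selection_probabilities`, `quantize`, `fl_round`), the choice of gradient-norm-proportional selection probabilities, and the uniform scalar quantizer are all placeholders, and the paper's wireless resource allocation is omitted entirely.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): one round of
# communication-efficient FL with probabilistic device selection
# and uniform parameter quantization.

def selection_probabilities(grad_norms):
    """Assumed heuristic: devices with larger updates (which plausibly
    improve convergence more) get a higher selection probability."""
    g = np.asarray(grad_norms, dtype=float)
    return g / g.sum()

def quantize(update, num_bits=4):
    """Uniform scalar quantizer: map each entry to one of 2**num_bits
    levels spanning the update's range, shrinking the transmitted payload.
    Returns the dequantized values the server would recover."""
    lo, hi = update.min(), update.max()
    levels = 2 ** num_bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return np.round((update - lo) / step) * step + lo

def fl_round(global_model, local_updates, num_selected, num_bits, rng):
    """Server side of one round: sample a subset of devices according to
    the selection probabilities, collect their quantized updates, and
    apply the average to the global model."""
    norms = [np.linalg.norm(u) for u in local_updates]
    probs = selection_probabilities(norms)
    chosen = rng.choice(len(local_updates), size=num_selected,
                        replace=False, p=probs)
    received = [quantize(local_updates[i], num_bits) for i in chosen]
    return global_model + np.mean(received, axis=0)

# Toy usage: four devices with updates of varying magnitude.
rng = np.random.default_rng(0)
model = np.zeros(10)
updates = [rng.normal(scale=s, size=10) for s in (0.1, 0.5, 1.0, 2.0)]
model = fl_round(model, updates, num_selected=2, num_bits=4, rng=rng)
print(model)
```

In the framework the abstract describes, the selection probabilities are tied to each device's expected contribution to convergence speed and training loss, and quantization is optimized jointly with wireless resource allocation; the sketch above only illustrates the mechanism of selecting fewer devices and sending fewer bits per round.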
