Article

Self-Balancing Federated Learning With Global Imbalanced Data in Mobile Systems

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2020.3009406

Keywords

Distributed databases; Training; Machine learning; Mobile handsets; Data models; Servers; Neural networks; Federated learning; distributed machine learning; neural networks

Funding

  1. National Natural Science Foundation of China [61672116, 61601067, 61802038, 61672115]
  2. Chongqing High-Tech Research Key Program [cstc2019jscx-mbdx0063]
  3. Fundamental Research Funds for the Central Universities [0214005207005, 2019CDJGFJSJ001]
  4. Chongqing Youth Talent Support Program
  5. China Postdoctoral Science Foundation [2017M620412]

Federated Learning (FL) is a distributed deep learning method in which multiple devices contribute to training a neural network while keeping their data private. Data imbalance in mobile systems can degrade the accuracy of FL applications; the Astraea framework mitigates this through data augmentation and client rescheduling. Compared to FedAvg, Astraea demonstrates higher accuracy and reduced communication traffic.
Federated learning (FL) is a distributed deep learning method that enables multiple participants, such as mobile and IoT devices, to contribute to training a neural network while their private training data remains on local devices. This distributed approach is promising in mobile systems, which have a large corpus of decentralized data and require high privacy. However, unlike common benchmark datasets, the data distribution in mobile systems is imbalanced, which increases the bias of the model. In this article, we demonstrate that imbalanced distributed training data causes an accuracy degradation in FL applications. To counter this problem, we build a self-balancing FL framework named Astraea, which alleviates the imbalances by 1) Z-score-based data augmentation and 2) mediator-based multi-client rescheduling. The proposed framework relieves global imbalance through adaptive data augmentation and downsampling; to average out the local imbalance, it creates mediators that reschedule the training of clients based on the Kullback-Leibler divergence (KLD) of their data distributions. Compared with FedAvg, the vanilla FL algorithm, Astraea achieves +4.39 and +6.51 percent improvements in top-1 accuracy on the imbalanced EMNIST and imbalanced CINIC-10 datasets, respectively. Meanwhile, the communication traffic of Astraea is reduced by 75 percent compared to FedAvg.
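To illustrate the two balancing ideas from the abstract, the sketch below (Python with NumPy) shows a z-score test for spotting minority and majority classes, and a greedy, capacity-bounded mediator assignment that keeps each mediator's combined label distribution close to uniform by minimizing KLD. The function names, the greedy assignment policy, and the `capacity` parameter are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def classes_to_rebalance(global_counts, threshold=1.0):
    """Flag classes whose z-score deviates from the mean class size.
    Classes below -threshold are candidates for augmentation; classes
    above +threshold are candidates for downsampling.
    (Illustrative stand-in for Astraea's z-score-based step.)"""
    c = np.asarray(global_counts, dtype=float)
    z = (c - c.mean()) / c.std()
    return np.where(z < -threshold)[0], np.where(z > threshold)[0]

def kld_from_uniform(counts):
    """KL divergence between a normalized label-count vector and uniform."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    nz = p > 0  # 0 * log 0 contributes nothing
    return float(np.sum(p[nz] * np.log(p[nz] * len(p))))

def assign_to_mediators(client_counts, num_mediators, capacity):
    """Greedy rescheduling: each client joins the non-full mediator whose
    combined label distribution stays closest to uniform (lowest KLD)."""
    num_classes = len(client_counts[0])
    totals = [np.zeros(num_classes) for _ in range(num_mediators)]
    groups = [[] for _ in range(num_mediators)]
    for cid, counts in enumerate(client_counts):
        open_m = [m for m in range(num_mediators) if len(groups[m]) < capacity]
        best = min(open_m, key=lambda m: kld_from_uniform(totals[m] + counts))
        totals[best] += np.asarray(counts, dtype=float)
        groups[best].append(cid)
    return groups
```

Under this sketch, two clients holding only class 0 and only class 1 would be paired under one mediator, whose sequential training then sees a roughly uniform label distribution per round.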

