Article

Adaptive Federated Learning With Non-IID Data

Journal

COMPUTER JOURNAL
Volume: -, Issue: -, Pages: -

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/comjnl/bxac118

Keywords

Federated Learning; Model Aggregation; Non-IID

Funding

  1. Technology Research and Development Program of China [2019YFB2102100]
  2. National Natural Science Foundation of China [62072146, 61972358]
  3. Key Research and Development Program of Zhejiang Province [2021C03187, 2019C03134]

Abstract

This paper presents a federated learning algorithm called FedDynamic that tackles the statistical challenge caused by non-independent and identically distributed (Non-IID) data. By assigning different aggregation weights to devices and dynamically adjusting those weights based on key indices, FedDynamic achieves better accuracy and convergence than existing algorithms.
With the widespread use of Internet of Things (IoT) devices, an enormous volume of data is generated, and mining its value while ensuring security and privacy is a challenge. Federated learning is a decentralized approach that trains on data located on edge devices, such as mobile phones and IoT devices, while preserving privacy, efficiency, and security. However, Non-IID (non-independent and identically distributed) data greatly degrades the performance of the global model. In this paper, we propose the FedDynamic algorithm to solve the statistical challenge of federated learning caused by Non-IID data. Because Non-IID data can lead to significant differences in model parameters across edge devices, we assign different aggregation weights to different devices to obtain a high-performance global model. We analyze and extract key indices that reflect model quality (local model accuracy, local data quality, and the difference between local models and the global model) and compute each device's aggregation weight from these indices. Furthermore, we dynamically adjust the aggregation weights according to changes in accuracy to counter weight staleness during training. Experiments on the MNIST, FMNIST, EMNIST, CINIC-10 and CIFAR-10 datasets show that FedDynamic achieves better accuracy and convergence than the FedAvg, FedProx and Scaffold algorithms.
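To make the aggregation idea concrete, the sketch below shows one plausible way to combine the three indices named in the abstract into per-device weights and to adjust them from round to round. It is a minimal illustration, assuming local accuracy, local data size (as a stand-in for data quality), and the L2 distance from the global model as the indices, and an exponential factor driven by accuracy change for the dynamic adjustment; the function names, combination formula, and the alpha parameter are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def aggregation_weights(accs, data_sizes, model_dists, prev_accs, alpha=1.0):
    """Illustrative per-device aggregation weights from three indices.

    accs        : current local test accuracy per device
    data_sizes  : number of local samples per device (proxy for data quality)
    model_dists : L2 distance between each local model and the global model
    prev_accs   : local accuracies from the previous round (dynamic adjustment)
    alpha       : hypothetical sensitivity to accuracy change
    """
    accs = np.asarray(accs, dtype=float)
    sizes = np.asarray(data_sizes, dtype=float)
    dists = np.asarray(model_dists, dtype=float)
    prev = np.asarray(prev_accs, dtype=float)

    # Base score: favor accurate models trained on more data, and
    # down-weight models that drift far from the current global model.
    score = accs * (sizes / sizes.sum()) / (1.0 + dists)

    # Dynamic adjustment: boost devices whose accuracy improved this round,
    # so earlier weights do not become stale as training progresses.
    score *= np.exp(alpha * (accs - prev))

    return score / score.sum()

def aggregate(local_params, weights):
    """Weighted average of local model parameters (flattened np.ndarrays)."""
    return sum(w * p for w, p in zip(weights, local_params))
```

With uniform weights this reduces to plain FedAvg-style averaging; the accuracy, data-quality, and distance terms are what bias the global update toward locally better-behaved models under Non-IID data.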
