Article

Asynchronous Federated Learning Over Wireless Communication Networks

Journal

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
Volume 21, Issue 9, Pages 6961-6978

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TWC.2022.3153495

Keywords

Training; Wireless communication; Convergence; Computational modeling; Adaptation models; Optimization; Collaborative work; Asynchronous federated learning; model fusion; model staleness; distributed learning

Funding

  1. National Key Research and Development Program of China [2020YFB1807101, 2018YFB1801104]
  2. National Natural Science Foundation of China [U20A20158, 61725104, U21B2029]
  3. National Research Foundation, Singapore
  4. Infocomm Media Development Authority under its Future Communications Research and Development Program
  5. Singapore University of Technology and Design (SUTD) Growth Plan Grant for Artificial Intelligence (AI)

Abstract

This paper proposes a novel asynchronous federated learning framework that adapts to the heterogeneity of users, communication environments, and learning tasks by considering delays in training and uploading local models and the resulting staleness among received models. A centralized fusion algorithm is designed to determine fusion weights during global updates, aiming to achieve fast and smooth convergence while enhancing training efficiency.
The conventional federated learning (FL) framework usually assumes synchronous reception and fusion of all the local models at the central aggregator, as well as synchronous updating and training of the global model at all the agents. However, in a wireless network, limited radio resources, inevitable transmission failures, and heterogeneous computing capacities make it very hard to realize strict synchronization among all the involved user equipments (UEs). In this paper, we propose a novel asynchronous FL framework that adapts well to the heterogeneity of users, communication environments, and learning tasks by accounting for both the possible delays in training and uploading the local models and the resultant staleness among the received models, which heavily affects the global model fusion. A novel centralized fusion algorithm is designed to determine the fusion weights during the global update; it aims to make full use of the fresh information contained in the uploaded local models while avoiding biased convergence by enforcing that the impact of each UE's local dataset is proportional to its sample share. Numerical experiments validate that the proposed asynchronous FL framework achieves fast and smooth convergence and significantly enhances training efficiency.
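To make the staleness-aware fusion idea concrete, below is a minimal sketch of one plausible weighting scheme. The function name, the exponential staleness discount, and the final blending step are illustrative assumptions, not the exact algorithm proposed in the paper; the sketch only shows how a fusion weight can combine each UE's sample share with a penalty on stale uploads.

```python
import numpy as np

def fuse(global_model, updates, global_round, decay=0.5):
    """Staleness-aware asynchronous fusion (hypothetical sketch).

    global_model : np.ndarray, current global parameters
    updates      : list of (params, n_k, round_k) tuples, where params are a
                   UE's local parameters, n_k its number of samples, and
                   round_k the global round it started training from
    global_round : int, current global round index
    decay        : assumed staleness discount factor in (0, 1]
    """
    total_samples = sum(n_k for _, n_k, _ in updates)
    fused = np.zeros_like(global_model)
    weight_sum = 0.0
    for params, n_k, round_k in updates:
        staleness = global_round - round_k
        # Sample-share term keeps each UE's influence proportional to its
        # data; the exponential term down-weights stale models (assumption).
        w = (n_k / total_samples) * (decay ** staleness)
        fused += w * params
        weight_sum += w
    if weight_sum == 0.0:
        return global_model
    fused /= weight_sum
    # Blend with the current global model so that a few fresh uploads do not
    # erase contributions from UEs that have not reported in this round.
    mix = weight_sum / (weight_sum + 1.0)
    return (1.0 - mix) * global_model + mix * fused
```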
