Article

Fast-Convergent Federated Learning

Journal

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSAC.2020.3036952

Keywords

Collaborative work; Convergence; Data models; Computational modeling; Servers; Performance evaluation; Optimization; Federated learning; distributed optimization; fast convergence rate

Funding

  1. Defense Advanced Research Projects Agency (DARPA) [AWD1005371, AWD1005468]
  2. U.S. National Science Foundation [CCF-1908308]

Abstract

This paper proposes a fast-convergent federated learning algorithm called FOLB, which optimizes the convergence speed of model training through intelligent sampling of devices, handles device heterogeneity, and experimentally demonstrates its improvement in model accuracy, convergence speed, and stability across various tasks.
Federated learning has recently emerged as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved through each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize a lower bound on the improvement that can be obtained in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that FOLB obtains this bound through uniform sampling by weighting device updates according to their gradient information. FOLB is able to handle both communication and computation heterogeneity of devices by adapting the aggregations according to estimates of each device's capability to contribute to the updates. We evaluate FOLB in comparison with existing federated learning algorithms and experimentally show its improvement in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
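The core mechanism described above, weighting each device's update by how well its local gradient aligns with the global gradient, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the quadratic per-device losses, the centers, the learning rate, and the function names are all hypothetical assumptions made for the example.

```python
import numpy as np

# Toy federated setup (hypothetical): device k holds the quadratic loss
# f_k(w) = 0.5 * ||w - c_k||^2, so its local gradient at w is (w - c_k).
centers = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [2.0, 2.0, 2.0],
                    [-1.0, -1.0, 0.0]])

def global_loss(w):
    # Global objective: average of the per-device losses.
    return 0.5 * np.mean(np.sum((w - centers) ** 2, axis=1))

def alignment_weighted_round(w, lr=0.1):
    """One aggregation round: each device's gradient step is weighted by the
    inner product between its local gradient and the full-participation
    gradient, favoring devices whose updates are expected to improve the
    global model most (the alignment idea described in the abstract)."""
    local_grads = w - centers                   # gradient of each f_k at w
    g = local_grads.mean(axis=0)                # full-participation gradient
    align = np.maximum(local_grads @ g, 0.0)    # alignment-based weights
    if align.sum() == 0.0:                      # no device helps: keep w
        return w
    weights = align / align.sum()               # normalize the weights
    return w - lr * (weights[:, None] * local_grads).sum(axis=0)

w = np.array([5.0, 5.0, 5.0])                   # hypothetical initial model
loss_before = global_loss(w)
for _ in range(20):
    w = alignment_weighted_round(w)
loss_after = global_loss(w)
print(loss_before, loss_after)                  # alignment-weighted rounds reduce the global loss
```

In this sketch the weighting merely biases a gradient-descent step toward well-aligned devices; the actual FOLB algorithm additionally handles stochastic device sampling and heterogeneous communication/computation capabilities, which are omitted here for brevity.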

