Article

Accelerating Federated Learning via Momentum Gradient Descent

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE Computer Society
DOI: 10.1109/TPDS.2020.2975189

Keywords

Convergence; Machine learning; Servers; Distributed databases; Data models; Acceleration; Computational modeling; Accelerating convergence; distributed machine learning; federated learning; momentum gradient descent

Funding

  1. National Key Research and Development Program of China [2018YFA0701603]
  2. National Natural Science Foundation of China [61722114]
  3. USTC Research Funds of the Double First-Class Initiative [YD3500002001]

Abstract

Federated learning (FL) provides a communication-efficient approach to solving machine learning problems over distributed data without sending raw data to a central server. However, existing work on FL only utilizes first-order gradient descent (GD) and does not incorporate information from preceding iterations into the gradient update, which could potentially accelerate convergence. In this article, we consider a momentum term that relates to the last iteration. The proposed momentum federated learning (MFL) uses momentum gradient descent (MGD) in the local update step of the FL system. We establish the global convergence properties of MFL and derive an upper bound on its convergence rate. By comparing the upper bounds on the MFL and FL convergence rates, we provide conditions under which MFL accelerates convergence. For different machine learning models, the convergence performance of MFL is evaluated in experiments on the MNIST and CIFAR-10 datasets. Simulation results confirm that MFL is globally convergent and further reveal a significant convergence improvement over FL.
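The abstract describes local momentum gradient descent (MGD) updates combined with periodic aggregation at a server. The sketch below is a minimal, illustrative rendering of that idea in NumPy on a toy least-squares problem; the objective, the hyperparameter values (step size eta, momentum gamma, tau local steps), the equal-weight averaging of model and momentum, and all function names are assumptions for illustration, not the paper's exact algorithm or experimental setup.

    # Minimal sketch of MFL-style training (assumed setup, not the paper's code):
    # each client runs momentum gradient descent locally, and the server
    # periodically averages both the model and the momentum term.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy distributed least-squares data: each client holds its own (X_i, y_i).
    num_clients, n_per_client, dim = 4, 50, 5
    w_true = rng.normal(size=dim)
    clients = []
    for _ in range(num_clients):
        X = rng.normal(size=(n_per_client, dim))
        y = X @ w_true + 0.1 * rng.normal(size=n_per_client)
        clients.append((X, y))

    def grad(w, X, y):
        """Gradient of the local mean-squared-error loss."""
        return X.T @ (X @ w - y) / len(y)

    # Step size, momentum factor, local MGD steps per round, communication rounds.
    eta, gamma, tau, rounds = 0.05, 0.9, 10, 20

    w_global = np.zeros(dim)
    d_global = np.zeros(dim)  # aggregated momentum term

    for r in range(rounds):
        w_list, d_list = [], []
        for X, y in clients:
            w, d = w_global.copy(), d_global.copy()
            for _ in range(tau):              # local MGD updates
                d = gamma * d + grad(w, X, y)
                w = w - eta * d
            w_list.append(w)
            d_list.append(d)
        # Server aggregation: average both model and momentum across clients.
        w_global = np.mean(w_list, axis=0)
        d_global = np.mean(d_list, axis=0)
        loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
        print(f"round {r:02d}  loss {loss:.4f}")

Running the sketch prints a loss that decreases over communication rounds; with gamma set to 0 the inner loop reduces to plain local GD, which mirrors the FL baseline the abstract compares against.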

