Article

Sample-Based and Feature-Based Federated Learning for Unconstrained and Constrained Nonconvex Optimization via Mini-batch SSCA

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 70, Pages 3832-3847

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TSP.2022.3185895

Keywords

Optimization; Signal processing algorithms; Stochastic processes; Privacy; Approximation algorithms; Servers; Computational modeling; Federated learning; nonconvex optimization; stochastic optimization; stochastic successive convex approximation

Funding

  1. National Key R&D Program of China [2018YFB1801102]
  2. Natural Science Foundation of Shanghai [20ZR1425300]

Abstract

Federated learning (FL) has become an active research area, as it enables the collaborative training of machine learning models among multiple clients that hold sensitive local data. Nevertheless, unconstrained federated optimization has been studied mainly using stochastic gradient descent (SGD), which may converge slowly, while constrained federated optimization, which is more challenging, has not yet been investigated. This paper studies both sample-based and feature-based federated optimization and, for each, considers both unconstrained and constrained nonconvex problems. First, we propose FL algorithms using stochastic successive convex approximation (SSCA) and mini-batch techniques. These algorithms can adequately exploit the structures of the objective and constraint functions and incrementally utilize samples. We show that the proposed FL algorithms converge to stationary points of the unconstrained problems and to Karush-Kuhn-Tucker (KKT) points of the constrained problems. Next, we provide algorithm examples with appealing computational complexity and communication load per communication round. We show that the proposed algorithm examples for unconstrained federated optimization are identical to FL algorithms based on momentum SGD, and we provide an analytical connection between SSCA and momentum SGD. Finally, numerical experiments demonstrate the inherent advantages of the proposed algorithms in convergence speed, communication and computation costs, and model specifications.
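
As a rough, single-node illustration of the recursion the abstract describes, the sketch below runs mini-batch SSCA on an unconstrained problem min_x E[f(x; xi)] with a standard quadratic surrogate. The names (grad_fn, tau) and the step-size exponents are illustrative assumptions, not the paper's exact algorithm examples; the point is that the recursively averaged gradient g_bar acts as a momentum buffer, which hints at the SSCA/momentum-SGD connection the abstract mentions.

import numpy as np

def minibatch_ssca(grad_fn, x0, data, T=200, batch=32, tau=1.0, rng=None):
    """Illustrative mini-batch SSCA sketch for min_x E[f(x; xi)].

    Each iteration forms a convex quadratic surrogate around x from a
    mini-batch gradient, averages it into a running surrogate, and moves
    x toward the surrogate's closed-form minimizer with a diminishing step.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    g_bar = np.zeros_like(x)  # averaged surrogate gradient (a momentum buffer)
    x_bar = x0.copy()         # averaged surrogate anchor point
    for t in range(T):
        rho_t = 1.0 / (t + 1) ** 0.6    # surrogate averaging weight (assumed rule)
        gamma_t = 1.0 / (t + 1) ** 0.9  # iterate step size (assumed rule)
        idx = rng.choice(len(data), size=batch, replace=False)
        g = grad_fn(x, data[idx])       # mini-batch gradient at the current x
        g_bar = (1 - rho_t) * g_bar + rho_t * g
        x_bar = (1 - rho_t) * x_bar + rho_t * x
        x_hat = x_bar - g_bar / tau     # closed-form minimizer of the averaged surrogate
        x = (1 - gamma_t) * x + gamma_t * x_hat
    return x

Under this quadratic surrogate, the only per-iteration statistic built from data is the mini-batch gradient, so in a sample-based FL deployment each client would upload a gradient-sized message per round, matching the communication pattern of momentum SGD.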
