Journal
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING
Volume 7, Issue 3, Pages 1421-1430
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2019.2933177
Keywords
Convergence; Optimization; Linear programming; Convex functions; Laplace equations; Machine learning; Training; Distributed convex optimization; machine learning; augmented Lagrange; stochastic averaging gradient
Funding
- National Natural Science Foundation of China [61773321, 61673080]
- Innovation Support Program for Chongqing Overseas Returnees [cx2017043]
- Chongqing Postdoctoral Science Foundation [Xm2017100]
Abstract
This paper investigates distributed optimization problems in which a group of networked nodes collaboratively minimizes the sum of all local objective functions. The local objective function of each node is further set as an average of a finite set of subfunctions. This setting is motivated by machine learning problems in which large training sets are distributed across, and privately known to, individual computational nodes. An augmented Lagrange (AL) stochastic gradient algorithm is presented to address the distributed optimization problem, which integrates the factorization of the weighted Laplacian with a local unbiased stochastic averaging gradient method. At each iteration, only one randomly selected gradient of a subfunction is evaluated at each node, and a variance-reduced stochastic averaging gradient technique is applied to approximate the gradient of the local objective function. Strong convexity of the local subfunctions and Lipschitz continuity of their gradients are shown to ensure a linear convergence rate of the proposed algorithm in expectation. Numerical experiments on a logistic regression problem demonstrate the theoretical results.