Journal
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 63, Issue 1, Pages 5-20
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2017.2713046
Keywords
Alternating direction method of multipliers (ADMM); composite convex function; distributed optimization; first-order method; linearized ADMM
Funding
- NSF [CMMI-1400217, CMMI-1635106]
- ARO [W911NF-17-1-0298]
- Department of Mathematics at UC Davis
Abstract
Given an undirected graph G = (N, E) of agents N = {1, ..., N} connected by edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Φ_i}_{i ∈ N}, where Φ_i ≜ ξ_i + f_i belongs to agent i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, agent i computes the prox map of ξ_i and the gradient of f_i, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, in which each agent i can access only noisy estimates of ∇f_i. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.
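The per-iteration structure described above (a local gradient step on f_i, a prox step on ξ_i, and mixing with graph neighbors) can be sketched as follows. This is not the paper's exact DPGA update; it is a generic distributed proximal-gradient sketch under assumed choices: f_i(x) = ½(x − b_i)², ξ_i(x) = λ|x| (whose prox map is soft-thresholding), a ring graph, and illustrative step size and mixing weights.

```python
import numpy as np

def soft_threshold(v, lam):
    """Prox map of xi_i(x) = lam * |x| (an assumed example choice)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def distributed_prox_grad(b, W, lam=0.1, step=0.5, iters=200):
    """Generic sketch (not the paper's exact DPGA update).

    Each agent i holds f_i(x) = 0.5 * (x - b[i])**2 and xi_i(x) = lam * |x|.
    W is a doubly stochastic mixing matrix matching the communication graph:
    agent i only combines values from its graph neighbors.
    """
    N = len(b)
    x = np.zeros(N)  # one scalar decision variable per agent
    for _ in range(iters):
        grad = x - b                        # local gradient of f_i at agent i
        y = W @ (x - step * grad)           # gradient step, then neighbor mixing
        x = soft_threshold(y, step * lam)   # local prox step on xi_i
    return x

# Ring of N = 4 agents: each agent mixes with its two neighbors.
N = 4
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

b = np.array([1.0, 2.0, 3.0, 4.0])
x = distributed_prox_grad(b, W)
# The iterates cluster around the minimizer of sum_i [f_i + xi_i];
# a constant step size leaves a residual consensus gap, which is why the
# paper measures consensus violation separately from suboptimality error.
```

With λ = 0.1 the centralized minimizer of Σ_i [½(x − b_i)² + 0.1|x|] is mean(b) − 0.1 = 2.4, and the agents' average settles there while individual iterates retain a small consensus gap, consistent with the ergodic O(1/t) behavior the paper analyzes for the exact DPGA.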