Article

Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 63, Issue 1, Pages 5-20

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2017.2713046

Keywords

Alternating direction method of multipliers (ADMM); composite convex function; distributed optimization; first-order method; linearized ADMM

Funding

  1. NSF [CMMI-1400217, CMMI-1635106]
  2. ARO [W911NF-17-1-0298]
  3. Department of Mathematics at UC Davis

Abstract

Given an undirected graph G = (N, E) of agents N = {1, ..., N} connected by edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Phi_i}_{i in N}, where Phi_i ≜ xi_i + f_i belongs to agent i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, every agent i computes the prox map of xi_i and the gradient of f_i, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, in which each agent i can access only noisy estimates of ∇f_i. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA, with rates O(1/t) and O(1/√t), respectively.
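The iteration pattern described in the abstract (a local prox-gradient computation at each agent, followed by communication with neighbors) can be illustrated with a small numerical sketch. The snippet below is not the paper's DPGA/linearized-ADMM update; it is a minimal stand-in assuming a hypothetical problem in which xi_i is an l1 penalty (so its prox map is soft-thresholding), f_i is a local least-squares term, the network is a ring, and the step size and neighbor-averaging rule are placeholder choices.

```python
# Illustrative sketch only (not the paper's exact DPGA update): one distributed
# prox-gradient pass with neighbor averaging on a toy consensus problem where
# xi_i(x) = lam*||x||_1 and f_i(x) = 0.5*||A_i x - b_i||^2.
# All data, graph, and step sizes below are hypothetical placeholders.
import numpy as np

np.random.seed(0)
n_agents, dim, lam, step = 4, 5, 0.1, 0.02
# Ring graph: each agent communicates only with its two neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
A = [np.random.randn(8, dim) for _ in range(n_agents)]
b = [np.random.randn(8) for _ in range(n_agents)]
x = [np.zeros(dim) for _ in range(n_agents)]

def soft_threshold(v, tau):
    """Prox map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

for t in range(200):
    # 1) Local computation: gradient step on f_i, then prox map of xi_i.
    local = []
    for i in range(n_agents):
        grad = A[i].T @ (A[i] @ x[i] - b[i])              # gradient of f_i at x_i
        local.append(soft_threshold(x[i] - step * grad, step * lam))
    # 2) Local communication: average with neighbors' updates (consensus step).
    x = [np.mean([local[i]] + [local[j] for j in neighbors[i]], axis=0)
         for i in range(n_agents)]

consensus_gap = max(np.linalg.norm(x[i] - x[0]) for i in range(n_agents))
print("max consensus violation:", consensus_gap)
```

In the stochastic variant (SDPGA) described above, the exact gradient A_i.T @ (A_i @ x_i - b_i) would be replaced by a noisy estimate, which is what yields the slower O(1/√t) ergodic rate.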
