Article

Stochastic Proximal Gradient Consensus Over Random Networks

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 65, Issue 11, Pages 2933-2948

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2017.2673815

Keywords

Distributed optimization; ADMM; rate analysis; fast algorithms

Funding

  1. National Science Foundation [CCF-1526078]
  2. Air Force Office of Scientific Research [15RT0767]
  3. National Natural Science Foundation of China [61571385]
  4. NSF Directorate for Computer & Information Science & Engineering
  5. Division of Computing and Communication Foundations [1526078] Funding Source: National Science Foundation

Abstract

We consider solving a convex optimization problem, possibly with only stochastic gradient information, over a randomly time-varying multi-agent network. Each agent has access to a local objective function and only has unbiased estimates of the gradient of its smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm with the following key features: 1) it works for both static and certain randomly time-varying networks; 2) it allows the agents to utilize either exact or stochastic gradient information; 3) it is convergent with a provable rate. In particular, the proposed algorithm converges to a global optimal solution at a rate of O(1/r) [resp. O(1/√r)] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
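
To make the algorithmic description above concrete, the following is a minimal Python sketch of a generic decentralized stochastic proximal-gradient iteration with a consensus (mixing) step over a possibly time-varying network. It is an illustration under simplifying assumptions, not the paper's exact DySPGC update: the function names (consensus_stochastic_prox_step, grad_est, prox_g), the mixing matrix W_r, and the toy l1-regularized least-squares problem are all hypothetical choices made for this example.

    import numpy as np

    def consensus_stochastic_prox_step(x, W_r, grad_est, prox_g, alpha):
        """One synchronous round over n agents (illustrative sketch only).

        x        : (n, d) array; row i is agent i's current iterate
        W_r      : (n, n) doubly stochastic mixing matrix drawn for round r
        grad_est : callable(i, x_i) -> unbiased estimate of agent i's smooth gradient
        prox_g   : callable(v, alpha) -> proximal operator of the nonsmooth term
        alpha    : step size
        """
        x_mix = W_r @ x                      # consensus: mix iterates with neighbors
        x_new = np.empty_like(x)
        for i in range(x.shape[0]):
            g_i = grad_est(i, x[i])          # (stochastic) gradient of the smooth part
            x_new[i] = prox_g(x_mix[i] - alpha * g_i, alpha)  # proximal update
        return x_new

    # Toy usage on a hypothetical l1-regularized least-squares problem.
    rng = np.random.default_rng(0)
    n_agents, dim, lam = 5, 3, 0.1
    A = rng.normal(size=(n_agents, 10, dim))   # agent i holds local data (A[i], b[i])
    b = rng.normal(size=(n_agents, 10))

    def grad_est(i, x_i):                      # minibatch gradient of 0.5*||A_i x - b_i||^2
        idx = rng.integers(0, 10, size=4)
        Ai, bi = A[i][idx], b[i][idx]
        return Ai.T @ (Ai @ x_i - bi) / len(idx)

    def prox_g(v, alpha):                      # soft-thresholding: prox of alpha*lam*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - alpha * lam, 0.0)

    x = np.zeros((n_agents, dim))
    for r in range(1, 201):
        W_r = np.full((n_agents, n_agents), 1.0 / n_agents)   # placeholder mixing matrix
        x = consensus_stochastic_prox_step(x, W_r, grad_est, prox_g, alpha=0.5 / np.sqrt(r))

The diminishing step size of order 1/√r in the loop mirrors the stochastic-gradient setting described in the abstract; with exact gradients a constant step size would be the natural choice for the O(1/r) regime.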

