Article

Stochastic Proximal Gradient Consensus Over Random Networks

Journal

IEEE Transactions on Signal Processing
Volume 65, Issue 11, Pages 2933-2948

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2017.2673815

Keywords

Distributed optimization; ADMM; rate analysis; fast algorithms

Funding

  1. National Science Foundation [CCF-1526078]
  2. Air Force Office of Scientific Research [15RT0767]
  3. National Natural Science Foundation of China [61571385]

Abstract

We consider solving a convex optimization problem, possibly with access only to stochastic gradients, over a randomly time-varying multiagent network. Each agent has access to a local objective function and only to unbiased estimates of the gradient of its smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm with the following key features: 1) it works for both static and certain randomly time-varying networks; 2) it allows the agents to use either exact or stochastic gradient information; 3) it converges at a provable rate. In particular, the proposed algorithm converges to a global optimal solution at a rate of O(1/r) [resp. O(1/√r)] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of seemingly unrelated distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
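To make the abstract's setting concrete, the sketch below shows a generic decentralized stochastic proximal-gradient consensus loop: each agent mixes its iterate with neighbors through a mixing matrix, takes a step along an unbiased gradient estimate of its smooth local objective, and applies a proximal operator for the nonsmooth term. This is only an illustrative toy, not the paper's actual DSPGC/ADMM-based update; the mixing matrix W, step size alpha, the l1 regularizer, and all function names here are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the assumed nonsmooth term in this toy example).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def decentralized_prox_grad(grad_fns, W, dim, steps=500, alpha=0.05, lam=0.01, rng=None):
    """Toy decentralized (stochastic) proximal-gradient consensus iteration.

    grad_fns[i](x, rng) returns an unbiased gradient estimate of agent i's smooth
    local objective at x; W is a doubly stochastic mixing matrix over the network.
    This is a hypothetical sketch, not the algorithm analyzed in the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(grad_fns)
    X = np.zeros((n, dim))                       # one row of decision variables per agent
    for _ in range(steps):
        X_mix = W @ X                            # consensus (averaging) step over the network
        grads = np.stack([g(X_mix[i], rng) for i, g in enumerate(grad_fns)])
        X = soft_threshold(X_mix - alpha * grads, alpha * lam)   # stochastic prox-gradient step
    return X.mean(axis=0)                        # network average as the returned estimate

# Usage: 3 agents, each with a noisy quadratic local loss plus a shared l1 regularizer.
rng = np.random.default_rng(0)
targets = [np.array([1.0, -2.0]), np.array([0.5, 0.0]), np.array([2.0, 1.0])]
grad_fns = [lambda x, r, t=t: (x - t) + 0.1 * r.standard_normal(x.shape) for t in targets]
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])               # static, doubly stochastic mixing matrix
print(decentralized_prox_grad(grad_fns, W, dim=2, rng=rng))
```

With a constant step size and stochastic gradients, such a scheme only reaches a neighborhood of the optimum; the O(1/√r) rate claimed in the abstract refers to the paper's own algorithm under its stated assumptions, not to this sketch.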
