Article

Fast Distributed Gradient Methods

Journal

IEEE Transactions on Automatic Control
Volume 59, Issue 5, Pages 1131-1146

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TAC.2014.2298712

Keywords

Consensus; convergence rate; distributed optimization; Nesterov gradient

Funding

  1. Directorate for Computer & Information Science & Engineering
  2. Division of Computing and Communication Foundations [1011903] - Funding Source: National Science Foundation

Abstract

We study distributed optimization problems in which N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradients (with constant L), and bounded gradients. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k. Our first method, Distributed Nesterov Gradient, achieves rates O(log K/K) and O(log k/k). Our second method, Distributed Nesterov gradient with Consensus iterations, assumes that all nodes know L and μ(W), the second largest singular value of the N x N doubly stochastic weight matrix W. It achieves rates O(1/K^(2-ε)) and O(1/k^2) (ε > 0 arbitrarily small). Further, we give, for both methods, the explicit dependence of the convergence constants on N and W. Simulation examples illustrate our findings.
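
For intuition, the sketch below runs a distributed Nesterov-style gradient iteration with consensus averaging over a doubly stochastic weight matrix W. The quadratic local costs, the ring-graph weights, the diminishing step size 1/(k+1), and the momentum weight k/(k+3) are illustrative assumptions for a toy problem, not the paper's exact constants or full algorithms.

```python
import numpy as np

# Toy setup (assumed): N nodes on a ring, each holding a private quadratic cost
# f_i(x) = 0.5 * (x - b_i)^2; together they minimize sum_i f_i(x), whose
# minimizer is the average of the b_i.
N = 5
rng = np.random.default_rng(0)
b = rng.normal(size=N)                      # each node's private target b_i

def grad(i, x):
    """Gradient of node i's local cost f_i(x) = 0.5*(x - b_i)^2."""
    return x - b[i]

# Doubly stochastic weights for a ring: each node averages with its two neighbours.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)                             # current iterates x_i(k)
y = np.zeros(N)                             # Nesterov "look-ahead" points y_i(k)

for k in range(200):
    alpha = 1.0 / (k + 1)                   # diminishing step size (assumed schedule)
    # Consensus step mixes neighbours' y values, then each node takes a local
    # gradient step at its own y_i.
    x_new = W @ y - alpha * np.array([grad(i, y[i]) for i in range(N)])
    # Nesterov-style momentum extrapolation (assumed weight k/(k+3)).
    y = x_new + (k / (k + 3)) * (x_new - x)
    x = x_new

print("node estimates:", np.round(x, 4))
print("true minimizer:", round(b.mean(), 4))  # average of the b_i
```

Each iteration costs one round of neighbour communication (the multiplication by W) and one local gradient evaluation per node, which is the per-node communication/gradient accounting the abstract's rates refer to.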
