Article

Convergence rate analysis of distributed optimization with projected subgradient algorithm

Journal

AUTOMATICA
Volume 83, Issue -, Pages 162-169

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.automatica.2017.06.011

Keywords

Distributed optimization; Subgradients; Convergence rate; Multi-agent systems

Funding

  1. Qilu Youth Scholar Discipline Construction Funding from Shandong University
  2. National Natural Science Foundation of China (NSFC) [61304045, 61573220, 61633014]

In this paper, we revisit the consensus-based projected subgradient algorithm proposed for a common set constraint. We show that the commonly adopted non-summable and square-summable diminishing step sizes of subgradients can be relaxed to be only non-summable, provided the constrained optimum set is bounded. More importantly, for a strongly convex aggregate cost and different types of step sizes, we provide a systematic analysis to derive asymptotic upper bounds on the convergence rate in terms of the optimum residual, and select the best step sizes accordingly. Our result shows that a convergence rate of O(1/√k) can be achieved with a step size of O(1/k). © 2017 Elsevier Ltd. All rights reserved.
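
As a rough illustration of the setting described in the abstract, the sketch below implements one common form of the consensus-based projected subgradient iteration, x_i(k+1) = P_X( Σ_j a_ij x_j(k) − α_k g_i(k) ), run with the O(1/k) step size discussed above. The choice of constraint set (a Euclidean ball), the quadratic local costs f_i (so the aggregate cost is strongly convex), the uniform mixing matrix, and all function names are illustrative assumptions; this is not the authors' code or their exact algorithm.

```python
import numpy as np

def project_ball(x, radius=5.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius} (assumed constraint set X)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def local_subgradient(x, b_i):
    """Gradient of the illustrative local cost f_i(x) = 0.5 * ||x - b_i||^2."""
    return x - b_i

def distributed_projected_subgradient(A, b, iterations=2000, c=1.0):
    """Consensus step + projected subgradient step with diminishing step size alpha_k = c / k."""
    m, d = b.shape
    x = np.zeros((m, d))                      # each row is one agent's local estimate
    for k in range(1, iterations + 1):
        alpha = c / k                         # O(1/k) diminishing (non-summable) step size
        v = A @ x                             # consensus (weighted averaging) step
        x = np.vstack([project_ball(v[i] - alpha * local_subgradient(v[i], b[i]))
                       for i in range(m)])    # local subgradient step followed by projection
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d = 4, 2
    A = np.full((m, m), 1.0 / m)              # complete graph with uniform, doubly stochastic weights
    b = rng.normal(size=(m, d))
    x_final = distributed_projected_subgradient(A, b)
    print("agent estimates:\n", x_final)
    print("aggregate optimum (mean of b):", b.mean(axis=0))
```

With these quadratic local costs the aggregate minimizer is the mean of the b_i, so all agents' estimates should cluster around that point as k grows, consistent with the diminishing-step-size behaviour the paper analyses.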
