Article

Optimal distributed stochastic mirror descent for strongly convex optimization

Journal

AUTOMATICA
Volume 90, Pages 196-203

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.automatica.2017.12.053

Keywords

Distributed stochastic optimization; Strong convexity; Non-Euclidean divergence; Mirror descent; Epoch gradient descent; Optimal convergence rate

Funding

  1. Natural Science Fund for Excellent Young Scholars of Jiangsu Province [BK20170099]
  2. National Natural Science Foundation of China [61573344, 61733018, 61374180]
  3. Research Grants Council of the Hong Kong Special Administrative Region, China [CityU 11300415]

Abstract

In this paper, we consider convergence rate problems for stochastic strongly convex optimization in the non-Euclidean sense over a constraint set on a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use the Bregman divergence as the distance-measuring function, rather than the Euclidean distance employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of non-smooth, strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T), where T is the total number of iterations. The second algorithm is an epoch variant of the first and attains the optimal convergence rate of O(1/T), matching that of the best known centralized stochastic subgradient algorithm. Finally, we report simulation results to illustrate the proposed algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
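
Illustrative sketch

To make the abstract's key ideas concrete, the following is a minimal Python sketch of a distributed stochastic mirror descent step of the kind described above. It is not the paper's algorithm: it assumes a toy problem (quadratic local losses on the probability simplex), the negative-entropy mirror map (so the Bregman divergence is the KL divergence and the mirror step is an exponentiated-gradient update), a complete-graph averaging matrix W, and an O(1/t) step size of the type used under strong convexity.

# Illustrative sketch (not the paper's exact algorithm): distributed stochastic
# mirror descent on the probability simplex with the negative-entropy mirror map,
# so the Bregman divergence is the KL divergence and the local update is an
# exponentiated-gradient step. Agents average with neighbors via a doubly
# stochastic weight matrix W, then take a mirror step with a noisy subgradient.
# The losses, W, noise level, and step sizes are all assumptions for this demo.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, T = 4, 5, 2000
targets = rng.dirichlet(np.ones(dim), size=n_agents)  # each agent's private optimum
W = np.full((n_agents, n_agents), 1.0 / n_agents)     # complete-graph averaging weights

def noisy_subgradient(i, x):
    # Stochastic gradient of f_i(x) = 0.5 * ||x - targets[i]||^2 plus Gaussian noise.
    return (x - targets[i]) + 0.1 * rng.standard_normal(dim)

# All agents start at the uniform distribution (interior of the simplex).
x = np.full((n_agents, dim), 1.0 / dim)
x_avg = np.zeros_like(x)  # running average of the iterates, reported as the output

for t in range(1, T + 1):
    eta = 1.0 / t                    # O(1/t) step size, typical under strong convexity
    y = W @ x                        # consensus (weighted averaging) step
    for i in range(n_agents):
        g = noisy_subgradient(i, y[i])
        z = y[i] * np.exp(-eta * g)  # exponentiated-gradient (mirror) step
        x[i] = z / z.sum()           # Bregman projection back onto the simplex
    x_avg += (x - x_avg) / t

print("max disagreement across agents:", np.max(np.abs(x_avg - x_avg.mean(axis=0))))
print("first agent's averaged iterate:", np.round(x_avg[0], 3))

The epoch variant mentioned in the abstract would, roughly speaking, restart such a scheme in stages with geometrically adjusted step sizes and epoch lengths; the details are in the paper itself.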
