Article

An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization

Journal

IEEE Transactions on Signal Processing
Volume 69, Pages 1842-1858

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2021.3062553

Keywords

Convergence; Stochastic processes; Steady-state; Signal processing algorithms; Optimization; Transient analysis; Linear programming; Decentralized optimization; stochastic gradient methods; non-convex problems; multi-agent systems

Funding

  1. NSF [1903972, 1935555, 1513936]


This paper investigates decentralized online stochastic non-convex optimization over a network of nodes. By integrating the gradient-tracking technique into decentralized stochastic gradient descent, the resulting GT-DSGD algorithm is shown to have desirable properties for minimizing a sum of smooth non-convex functions, achieving network-independent performance that matches centralized minibatch SGD.
In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes. Integrating a technique called gradient tracking into decentralized stochastic gradient descent, we show that the resulting algorithm, GT-DSGD, enjoys certain desirable characteristics for minimizing a sum of smooth non-convex functions. In particular, for general smooth non-convex functions, we establish non-asymptotic characterizations of GT-DSGD and derive the conditions under which it achieves network-independent performance that matches the centralized minibatch SGD. In contrast, the existing results suggest that GT-DSGD is always network-dependent and is therefore strictly worse than the centralized minibatch SGD. When the global non-convex function additionally satisfies the Polyak-Łojasiewicz (PL) condition, we establish the linear convergence of GT-DSGD up to a steady-state error with appropriate constant step-sizes. Moreover, under stochastic approximation step-sizes, we establish, for the first time, the optimal global sublinear convergence rate on almost every sample path, in addition to the asymptotically optimal sublinear rate in expectation. Since strongly convex functions are a special case of the functions satisfying the PL condition, our results are not only immediately applicable but also improve upon the currently known best convergence rates and their dependence on problem parameters.
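For illustration, a minimal sketch of the GT-DSGD iteration, as it commonly appears in the gradient-tracking literature, is given below; the notation (mixing matrix $W = [w_{ij}]$, step-size $\alpha$, local stochastic gradients $g_i$) is assumed for exposition and is not quoted from the paper. Each node $i$ maintains an iterate $x_{i,k}$ and a gradient tracker $y_{i,k}$, initialized as $y_{i,0} = g_i(x_{i,0})$:

$$x_{i,k+1} = \sum_{j=1}^{n} w_{ij}\bigl(x_{j,k} - \alpha\, y_{j,k}\bigr), \qquad y_{i,k+1} = \sum_{j=1}^{n} w_{ij}\, y_{j,k} + g_i(x_{i,k+1}) - g_i(x_{i,k}),$$

where $g_i(\cdot)$ denotes an unbiased stochastic gradient of the local cost $f_i$. The mixing step averages information over neighboring nodes, while the tracker $y_{i,k}$ follows an estimate of the gradient of the global cost $F = \tfrac{1}{n}\sum_{i} f_i$. The Polyak-Łojasiewicz (PL) condition referenced above requires, for some $\mu > 0$,

$$\tfrac{1}{2}\,\|\nabla F(x)\|^{2} \;\ge\; \mu\,\bigl(F(x) - F^{\star}\bigr) \quad \text{for all } x,$$

which is implied by $\mu$-strong convexity but also holds for certain non-convex functions, consistent with the special-case remark in the abstract.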

