Article

A convergent incremental gradient method with a constant step size

Journal

SIAM Journal on Optimization
Volume 18, Issue 1, Pages 29-51

Publisher

SIAM Publications
DOI: 10.1137/040615961

Keywords

incremental gradient method; convergence analysis; sensor networks; neural networks; logistic regression; boosting

Abstract

An incremental aggregated gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case in which the gradient is bounded and Lipschitz continuous, we show that the method infinitely often visits regions in which the gradient is small. Under certain unimodality assumptions, global convergence is established. In the quadratic case, a global linear rate of convergence is shown. The method is applied to distributed optimization problems arising in wireless sensor networks, and numerical experiments compare the new method with other incremental gradient methods.
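The aggregated-gradient idea is simple enough to sketch. The following Python fragment is a minimal illustration, not the paper's reference code: it assumes cyclic component selection, a gradient table initialized by one full pass at the starting point, and a step size scaled by 1/n; the helper name iag and all parameter choices are hypothetical, and details such as initialization and scaling may differ from the paper.

import numpy as np

def iag(grads, x0, mu, n_iters):
    # Sketch of an incremental aggregated gradient iteration.
    # grads: list of per-component gradient functions, one per f_i
    # mu:    constant step size
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    # Most recent gradient of each component, filled by one full
    # pass at the starting point (an assumed initialization).
    table = [g(x) for g in grads]
    agg = np.sum(table, axis=0)        # running aggregate: sum_i table[i]
    for k in range(n_iters):
        i = k % n                      # cyclic component selection
        new_g = grads[i](x)            # the single gradient evaluation
        agg += new_g - table[i]        # refresh the aggregate in O(dim)
        table[i] = new_g
        x = x - (mu / n) * agg         # constant-step aggregated update
    return x

# Example: least squares, f(x) = (1/2) * sum_i (a_i @ x - b_i)**2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
grads = [lambda x, a=a, c=c: a * (a @ x - c) for a, c in zip(A, b)]
x_hat = iag(grads, np.zeros(5), mu=0.2, n_iters=5000)

On a quadratic example like this, the iterate approaches the least-squares solution (compare with np.linalg.lstsq(A, b, rcond=None)[0]), provided the constant step size is small enough relative to the Lipschitz constant of the gradient, which is precisely the regime the paper's analysis addresses.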
