Article

Gradient convergence in gradient methods with errors

Journal

SIAM JOURNAL ON OPTIMIZATION
Volume 10, Issue 3, Pages 627-642

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/S1052623497331063

Keywords

gradient methods; incremental gradient methods; stochastic approximation; gradient convergence


We consider the gradient method $x_{t+1} = x_t + \gamma_t (s_t + w_t)$, where $s_t$ is a descent direction of a function $f : \mathbb{R}^n \to \mathbb{R}$ and $w_t$ is a deterministic or stochastic error. We assume that $\nabla f$ is Lipschitz continuous, that the stepsize $\gamma_t$ diminishes to 0, and that $s_t$ and $w_t$ satisfy standard conditions. We show that either $f(x_t) \to -\infty$ or $f(x_t)$ converges to a finite value and $\nabla f(x_t) \to 0$ (with probability 1 in the stochastic case), and in doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of $f$, boundedness of $\nabla f(x_t)$, or boundedness of $x_t$.
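
The following is a minimal sketch of the iteration described in the abstract, not the authors' implementation. The objective $f(x) = \tfrac{1}{2}\|x\|^2$, the Gaussian noise model, and the stepsize $\gamma_t = 1/(t+1)$ (which satisfies the usual diminishing-stepsize conditions $\sum_t \gamma_t = \infty$, $\sum_t \gamma_t^2 < \infty$) are all illustrative assumptions; $s_t = -\nabla f(x_t)$ is the simplest choice of descent direction.

```python
import numpy as np

# Sketch of the gradient method with errors:
#     x_{t+1} = x_t + gamma_t * (s_t + w_t)
# Assumptions for illustration (not from the paper):
#   f(x) = 0.5 * ||x||^2, so grad f(x) = x (Lipschitz with constant 1),
#   s_t = -grad f(x_t) (a descent direction),
#   w_t ~ N(0, I) (zero-mean stochastic error),
#   gamma_t = 1/(t+1) (diminishes to 0; sum gamma_t diverges,
#   sum gamma_t^2 converges).

def grad_f(x):
    return x  # gradient of f(x) = 0.5 * ||x||^2

rng = np.random.default_rng(0)
x = rng.normal(size=5)  # arbitrary starting point in R^5

for t in range(100_000):
    gamma = 1.0 / (t + 1)
    s = -grad_f(x)                      # descent direction s_t
    w = rng.normal(size=5)              # stochastic error w_t
    x = x + gamma * (s + w)             # the gradient method with errors

# Per the convergence result, ||grad f(x_t)|| -> 0 with probability 1;
# here the printed norm is small despite the persistent noise.
print(np.linalg.norm(grad_f(x)))
```

Note that $f$ here is bounded below and has bounded level sets, so convergence also follows from earlier results; the paper's contribution is that $\nabla f(x_t) \to 0$ (or $f(x_t) \to -\infty$) holds even without such boundedness assumptions.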

Authors

Dimitri P. Bertsekas; John N. Tsitsiklis
