Article

An acceleration of gradient descent algorithm with backtracking for unconstrained optimization

Journal

NUMERICAL ALGORITHMS
Volume 42, Issue 1, Pages 63-73

Publisher

SPRINGER
DOI: 10.1007/s11075-006-9023-9

Keywords

acceleration methods; backtracking; gradient descent methods


In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength t_k by means of a positive parameter theta_k, in a multiplicative manner, in such a way as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
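
To make the construction concrete, the sketch below shows gradient descent with Armijo backtracking in which the accepted steplength t_k is rescaled by a positive factor theta_k before the step is taken, in the multiplicative spirit described above. The particular theta_k used here (the minimizer of a one-dimensional quadratic model of f along the negative gradient) is an illustrative placeholder, not the formula derived in the paper; the names accelerated_gd, f, and grad are likewise assumptions made for the example.

```python
import numpy as np

def accelerated_gd(f, grad, x0, alpha=1e-4, beta=0.5, tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking; the accepted steplength t_k
    is then rescaled by a positive parameter theta_k (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gg = g.dot(g)
        if np.sqrt(gg) < tol:
            break
        fx = f(x)
        # Armijo backtracking: shrink t_k until the sufficient-decrease test holds.
        t = 1.0
        while f(x - t * g) > fx - alpha * t * gg:
            t *= beta
        # Multiplicative acceleration: take the step theta_k * t_k * (-g).
        # theta_k below minimizes a quadratic model of f along -g built from
        # f(x), f(x - t g) and the directional derivative -gg; this is a
        # placeholder choice, not the formula derived in the paper.
        ft = f(x - t * g)
        denom = 2.0 * (ft - fx + t * gg)
        theta = (t * gg) / denom if denom > 1e-16 else 1.0
        x = x - theta * t * g
    return x

# Example: minimize the convex quadratic f(x) = 0.5 * x^T A x.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(accelerated_gd(f, grad, np.array([5.0, 1.0])))  # approx. [0, 0]
```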

