Article

ON THE COMPLEXITY OF STEEPEST DESCENT, NEWTON'S AND REGULARIZED NEWTON'S METHODS FOR NONCONVEX UNCONSTRAINED OPTIMIZATION PROBLEMS

Journal

SIAM JOURNAL ON OPTIMIZATION
Volume 20, Issue 6, Pages 2833-2852

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/090774100

Keywords

nonlinear optimization; unconstrained optimization; steepest-descent method; Newton's method; trust-region methods; cubic regularization; global complexity bounds; global rate of convergence

Funding

  1. Royal Society [14265]
  2. EPSRC [EP/E053351/1, EP/G038643/1]
  3. Sciences et Technologies pour l'Aéronautique et l'Espace (STAE) Foundation (Toulouse, France) within the Réseau Thématique de Recherche Avancée (RTRA)
  4. EPSRC [EP/F005369/1] Funding Source: UKRI
  5. Engineering and Physical Sciences Research Council [EP/F005369/1, EP/E053351/1] Funding Source: researchfish

It is shown that, under standard assumptions, both the steepest-descent and Newton's methods for unconstrained nonconvex optimization may require a number of iterations and function evaluations arbitrarily close to O(ε^(-2)) to drive the norm of the gradient below ε. This shows that the known upper bound of O(ε^(-2)) evaluations for the steepest-descent method is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation complexity bound of O(ε^(-3/2)) evaluations known for cubically regularized Newton's methods is also shown to be tight.
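To make the complexity measure concrete, the following sketch (not from the paper; the quadratic test function, step size, and function names are illustrative assumptions) runs steepest descent with a fixed step and counts the iterations needed to drive the gradient norm below a tolerance ε, showing the count growing as ε shrinks:

```python
import numpy as np

def steepest_descent(A, x0, epsilon, max_iter=100000):
    """Fixed-step steepest descent on f(x) = 0.5 * x^T A x.

    Returns the final iterate and the number of iterations taken
    to reach ||grad f(x)|| < epsilon. Illustrative sketch only;
    the paper's worst-case examples are far more intricate.
    """
    x = x0.astype(float)
    step = 1.0 / np.linalg.norm(A, 2)  # 1/L step for an L-smooth quadratic
    for k in range(max_iter):
        g = A @ x  # gradient of 0.5 * x^T A x
        if np.linalg.norm(g) < epsilon:
            return x, k
        x = x - step * g
    return x, max_iter

if __name__ == "__main__":
    A = np.diag([1.0, 10.0])       # mildly ill-conditioned quadratic
    x0 = np.array([1.0, 1.0])
    for eps in (1e-1, 1e-2, 1e-3):
        _, iters = steepest_descent(A, x0, eps)
        print(f"epsilon = {eps:.0e}: {iters} iterations")
```

On this easy convex example the iteration count grows only logarithmically in 1/ε; the paper's contribution is constructing nonconvex instances where the count approaches the ε^(-2) upper bound.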

