Journal
SIAM JOURNAL ON OPTIMIZATION
Volume 20, Issue 6, Pages 2833-2852
Publisher
SIAM PUBLICATIONS
DOI: 10.1137/090774100
Keywords
nonlinear optimization; unconstrained optimization; steepest-descent method; Newton's method; trust-region methods; cubic regularization; global complexity bounds; global rate of convergence
Funding
- Royal Society [14265]
- EPSRC [EP/E053351/1, EP/G038643/1]
- Sciences et Technologies pour l'Aéronautique et l'Espace (STAE) Foundation (Toulouse, France), within the Réseau Thématique de Recherche Avancée (RTRA)
- EPSRC [EP/F005369/1] Funding Source: UKRI
- Engineering and Physical Sciences Research Council [EP/F005369/1, EP/E053351/1] Funding Source: researchfish
Abstract
It is shown that, under standard assumptions, the steepest-descent and Newton's methods for unconstrained nonconvex optimization may both require a number of iterations and function evaluations arbitrarily close to O(ε^(-2)) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^(-2)) evaluations known for steepest descent is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation-complexity bound of O(ε^(-3/2)) evaluations known for cubically regularized Newton's methods is also shown to be tight.
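The iteration whose worst-case evaluation count the abstract discusses can be sketched as follows. This is a minimal fixed-step steepest-descent loop that stops once the gradient norm falls below ε and reports the number of gradient evaluations; the test function is an illustrative assumption, not the paper's worst-case construction (which is built specifically to force nearly ε^(-2) evaluations).

```python
import numpy as np

def steepest_descent(grad, x0, eps, step=1e-3, max_iter=10**6):
    """Fixed-step steepest descent.

    Stops when ||grad(x)|| <= eps and returns (x, gradient evaluations).
    In the worst case over smooth nonconvex objectives, the evaluation
    count can grow arbitrarily close to O(eps**-2).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - step * g
    return x, max_iter

# Illustrative smooth objective (an assumption for this sketch):
# f(x, y) = x**2 + y**4, with gradient (2x, 4y**3).
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1] ** 3])

x, n_evals = steepest_descent(grad, [1.0, 1.0], eps=1e-3)
```

On this simple objective the method terminates well before the worst-case bound; the paper's contribution is an objective on which no such early termination is possible.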