Journal
MATHEMATICAL PROGRAMMING
Volume 162, Issue 1-2, Pages 1-32
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s10107-016-1026-2
Keywords
Unconstrained optimization; Nonlinear optimization; Nonconvex optimization; Trust region methods; Global convergence; Local convergence; Worst-case iteration complexity; Worst-case evaluation complexity
Funding
- U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics, Early Career Research Program [DE-SC0010615]
- U.S. National Science Foundation [DMS-1217153, DMS-1319356]
We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any ε ∈ (0, ∞), the algorithm requires at most O(ε^(-3/2)) iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below ε. This improves upon the O(ε^(-2)) bound known to hold for some other trust region algorithms and matches the O(ε^(-3/2)) bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, entitled TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region radius update mechanism that allow it to achieve this worst-case global complexity bound. Importantly, we prove that TRACE also attains global and fast local convergence guarantees under assumptions similar to those made for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations, function evaluations, and derivative evaluations that the algorithm requires to obtain an approximate second-order stationary point.
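For readers unfamiliar with the framework the abstract builds on, the following is a minimal sketch of a *classical* trust region iteration (ratio-based step acceptance and radius update, with a Cauchy-point step). It is not the TRACE algorithm itself: the paper's modified acceptance criteria and radius update mechanism, which yield the improved O(ε^(-3/2)) bound, are precisely what differ from the standard rules shown here. All function names and parameter values below are illustrative choices, not taken from the paper.

```python
# Classical trust-region sketch (NOT the TRACE acceptance/update rules from
# the paper; those are modified to obtain the improved complexity bound).
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, eps=1e-6, max_iter=200):
    """Basic trust-region loop using the Cauchy-point step on the quadratic model."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:          # approximate first-order stationarity reached
            break
        # Cauchy point: minimize the model m(s) = f + g.s + 0.5 s'Hs
        # along -g, subject to ||s|| <= delta.
        gHg = g @ H @ g
        tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
        s = -tau * (delta / gnorm) * g
        pred = -(g @ s + 0.5 * s @ H @ s)          # predicted model decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho >= 0.1:                             # standard acceptance test
            x = x + s
        # Standard radius update: expand on very good steps, shrink on poor ones.
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Example on a nonconvex test function (Rosenbrock).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                              200 * (x[1] - x[0]**2)])
rosen_h = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
                              [-400 * x[0], 200.0]])
x_star = trust_region_minimize(rosen, rosen_g, rosen_h, [-1.2, 1.0])
```

A Cauchy-point step suffices only for the slower O(ε^(-2)) style of guarantee; complexity-optimal methods such as TRACE and ARC rely on more accurate subproblem solves together with carefully designed acceptance and radius-update rules.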