Journal
JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION
Volume 1, Issue 2, Pages 171-180
Publisher
AMER INST MATHEMATICAL SCIENCES-AIMS
DOI: 10.3934/jimo.2005.1.171
Keywords
nonsmooth convex optimization; Moreau-Yosida regularization; trust region method; BFGS method; strong convexity; inexact function and gradient evaluations
We propose an iterative method that solves a nonsmooth convex optimization problem by converting the original objective function into a once continuously differentiable function by way of Moreau-Yosida regularization. The proposed method uses approximate function and gradient values of the Moreau-Yosida regularization in place of the exact values. In this setting, Fukushima and Qi (1996) and Rauf and Fukushima (2000) proposed a proximal Newton method and a proximal BFGS method, respectively, for nonsmooth convex optimization. Whereas those methods employ a line search strategy to achieve global convergence, the method proposed in this paper uses a trust region strategy. We establish global and superlinear convergence of the method under appropriate assumptions.
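For context, the Moreau-Yosida regularization referred to in the abstract is the following standard construction; the notation here is illustrative and not taken from the paper itself. Given a proper, lower semicontinuous convex function f on R^n and a parameter \lambda > 0,

\[
F_\lambda(x) \;=\; \min_{y \in \mathbb{R}^n} \left\{ f(y) + \frac{1}{2\lambda} \, \lVert y - x \rVert^2 \right\},
\]

whose unique minimizer p(x) is the proximal point of x. The function F_\lambda is convex, finite everywhere, and once continuously differentiable, with

\[
\nabla F_\lambda(x) \;=\; \frac{x - p(x)}{\lambda},
\]

a gradient that is Lipschitz continuous with modulus 1/\lambda, and the minimizers of F_\lambda coincide with those of f. Since p(x) can rarely be computed exactly, methods in this line of work evaluate F_\lambda and \nabla F_\lambda only approximately, which is the "inexact function and gradient evaluations" setting listed in the keywords.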