Journal
MATHEMATICAL PROGRAMMING
Volume 92, Issue 2, Pages 197-235
Publisher
SPRINGER-VERLAG
DOI: 10.1007/s101070100293
Keywords
smoothing technique; nonlinear rescaling; multipliers method; Interior Prox method; Log-Sigmoid transformation; duality; Fermi-Dirac Entropy function
We introduce an alternative to the smoothing technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use this modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraint transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problem is to the corresponding Smoothing Penalty function as the Augmented Lagrangian is to the Classical Penalty function, or as MBFs are to the Barrier Functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of Quadratic and Nonquadratic Augmented Lagrangians and at the same time are free from their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in the primal space, followed by an update of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multipliers methods, which are equivalent to Interior Quadratic Prox methods for the dual problem. We prove convergence and estimate the rate of convergence of the NR multipliers method under very mild assumptions on the input data. We also estimate the rate of convergence under various assumptions on the input data. In particular, under the standard second-order optimality conditions the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters that correspond to the active constraints. We also establish global quadratic convergence of the NR methods for Linear Programming with a unique dual solution. We provide numerical results, which strongly support the theory.
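To make the scheme in the abstract concrete, here is a minimal sketch of an NR multipliers iteration using the Log-Sigmoid transformation ψ(t) = 2 ln(2/(1 + e^{-t})) (which satisfies ψ(0) = 0, ψ'(0) = 1) with the multiplier update λ ← λψ'(kc(x̂)). The one-constraint toy problem, the fixed scaling parameter k, and the bisection-based inner solver are my own illustrative choices and are not from the paper:

```python
import math

# Toy illustration (not from the paper):
#   minimize f(x) = (x - 2)^2  subject to  c(x) = 1 - x >= 0,
# whose KKT pair is x* = 1, lam* = 2.
# NR Lagrangian: L(x, lam, k) = f(x) - (lam/k) * psi(k * c(x)),
# with the Log-Sigmoid transformation psi(t) = 2*ln(2/(1 + e^{-t})).

def psi_prime(t):
    """Derivative of the Log-Sigmoid transformation: psi'(t) = 2/(1 + e^t)."""
    if t > 50:                        # overflow guard; here psi'(t) ~ 2*e^{-t}
        return 2.0 * math.exp(-t)
    return 2.0 / (1.0 + math.exp(t))

def nr_solve(k=10.0, iters=60):
    """Alternate primal minimization with a fixed k and multiplier updates."""
    lam, x = 1.0, 0.0
    for _ in range(iters):
        # L(., lam, k) is strictly convex in x here, so its minimizer solves
        # L'(x) = 2*(x - 2) + lam * psi'(k*(1 - x)) = 0; L' is increasing,
        # so plain bisection suffices for this one-dimensional sketch.
        lo, hi = -10.0, 10.0
        for _ in range(100):
            x = 0.5 * (lo + hi)
            if 2.0 * (x - 2.0) + lam * psi_prime(k * (1.0 - x)) < 0.0:
                lo = x
            else:
                hi = x
        # NR multiplier update: lam <- lam * psi'(k * c(x_hat)).
        lam *= psi_prime(k * (1.0 - x))
    return x, lam

x, lam = nr_solve()
print(x, lam)   # approaches the KKT pair (1, 2)
```

Note that the scaling parameter k stays fixed throughout: the multiplier update alone drives x̂ to the solution, which is the behavior the abstract claims for the NR method under standard second-order optimality conditions.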