Article

Global Convergence of ADMM in Nonconvex Nonsmooth Optimization

Journal

JOURNAL OF SCIENTIFIC COMPUTING
卷 78, 期 1, 页码 29-63

Publisher

SPRINGER/PLENUM PUBLISHERS
DOI: 10.1007/s10915-018-0757-z

Keywords

ADMM; Nonconvex optimization; Augmented Lagrangian method; Block coordinate descent; Sparse optimization

Funding

  1. NSF [DMS-1720237, ECCS-1462397]
  2. ONR [N00014171216]
  3. NSFC [61603162, 11501440, 61772246, 61603163]
  4. Doctoral start-up foundation of Jiangxi Normal University

Abstract

In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, phi(x0,...,xp,y), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables x0,...,xp,y, followed by updating the dual variable. We separate the variable y from the xi's as it plays a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, the lq quasi-norm, the Schatten-q quasi-norm (0 < q < 1), the minimax concave penalty (MCP), and the smoothly clipped absolute deviation (SCAD) penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassmann manifolds) and linear complementarity constraints. Also, the x0-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifolds, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant of the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges while ALM diverges with a bounded penalty parameter. Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge in the concerned scenarios.
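The update order described in the abstract (each primal block in turn, then the dual variable) can be sketched on a small illustrative instance. This is a hypothetical convex example chosen so every subproblem has a closed form; the paper's analysis allows nonconvex, nonsmooth terms (e.g., the lq quasi-norm or MCP) in place of the l1 penalty used here. All names and parameter values below are assumptions for illustration, not from the paper.

```python
import numpy as np

def admm(b, rho=1.0, lam=0.1, iters=200):
    """Illustrative ADMM: minimize lam*||x||_1 + 0.5*||y - b||^2
    subject to x - y = 0, following the block order primal x,
    primal y, then dual update."""
    n = b.size
    x = np.zeros(n)
    y = np.zeros(n)
    w = np.zeros(n)  # dual variable for the constraint x - y = 0
    for _ in range(iters):
        # x-update: proximal step of the l1 term (soft-thresholding)
        v = y - w / rho
        x = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # y-update: quadratic subproblem, solved in closed form
        y = (b + rho * x + w) / (1.0 + rho)
        # dual ascent step on the residual x - y
        w = w + rho * (x - y)
    return x, y

b = np.array([2.0, -0.05, 1.0])
x, y = admm(b)
# x approaches the soft-thresholded b: sign(b) * max(|b| - lam, 0)
```

The same loop structure carries over to the nonconvex settings covered by the paper: only the x-update's proximal map changes (e.g., to the MCP or lq proximal operator), while the quadratic y-block and the dual step keep the convergence analysis anchored.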

