Article

Augmented l(1) and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

Journal

SIAM JOURNAL ON IMAGING SCIENCES
Volume 6, Issue 2, Pages 1059-1091

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/120863290

Keywords

sparse optimization; global linear convergence; compressed sensing; low-rank matrix; matrix completion; exact regularization

Funding

  1. NSF [DMS-0748839, ECCS-1028790]
  2. ONR grant [N00014-08-1-1101]
  3. ARL
  4. ARO [W911NF-09-1-0383]

Abstract

This paper studies the long-existing idea of adding a nice smooth function to "smooth" a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that these augmented models let sparse vectors and low-rank matrices be efficiently recovered. In particular, they enjoy exact and stable recovery guarantees similar to those known for the minimization of $\|x\|_1$ and $\|X\|_*$, under conditions on the sensing operator such as its null-space property, restricted isometry property (RIP), spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, provided $\alpha \ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm, as well as its two fast variants, for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. This convergence property requires neither a sparse solution nor any property of $A$. To the best of our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
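To make the algorithmic claim concrete, below is a minimal NumPy sketch of the plain (unaccelerated) linearized Bregman iteration for $\min \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$, viewed as gradient ascent on the smooth Lagrange dual. The step size $\tau = 1/(\alpha\|A\|_2^2)$, the stopping rule, and the toy data are illustrative assumptions rather than the paper's exact setup, and the two fast variants mentioned in the abstract are not shown.

```python
import numpy as np

def shrink(v, t):
    """Elementwise soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, tol=1e-10, max_iter=50000):
    """Plain linearized Bregman iteration for
        min ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.  Ax = b.
    Equivalent to gradient ascent on the differentiable dual; the step size
    tau = 1 / (alpha * ||A||_2^2) below is a conservative, illustrative choice.
    """
    n = A.shape[1]
    tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    v = np.zeros(n)                       # accumulated dual variable, v = A^T y
    x = np.zeros(n)
    for _ in range(max_iter):
        x = alpha * shrink(v, 1.0)        # primal point recovered from dual
        r = b - A @ x                     # residual of the constraint Ax = b
        if np.linalg.norm(r) <= tol * max(np.linalg.norm(b), 1.0):
            break
        v += tau * (A.T @ r)              # dual gradient ascent step
    return x

# Tiny compressed-sensing example on hypothetical data: recover a sparse x0.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x_hat = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

The choice `alpha = 10 * np.abs(x0).max()` in the example mirrors the exact-regularization threshold $\alpha \ge 10\|x^0\|_\infty$ stated in the abstract, under which the augmented model returns (nearly) the same solution as plain $\ell_1$ minimization.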
