Article

Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

Journal

SIAM JOURNAL ON IMAGING SCIENCES
Volume 6, Issue 2, Pages 1059-1091

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/120863290

Keywords

sparse optimization; global linear convergence; compressed sensing; low-rank matrix; matrix completion; exact regularization

Funding

  1. NSF [DMS-0748839, ECCS-1028790]
  2. ONR grant [N00014-08-1-1101]
  3. ARL
  4. ARO [W911NF-09-1-0383]

Abstract

This paper studies the long-existing idea of adding a nice smooth function to "smooth" a nondifferentiable objective function in the context of sparse optimization, in particular the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that these augmented models allow sparse vectors and low-rank matrices to be recovered efficiently. In particular, they enjoy exact and stable recovery guarantees similar to those known for the minimization of $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property (RIP), spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$ if $\alpha \ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm, as well as its two fast variants, for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property requires neither a sparse solution nor any property of $A$. To the best of our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
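
To make the augmented model concrete, below is a minimal NumPy sketch of the linearized Bregman iteration for $\min \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$, written in the dual-gradient form the iteration admits. The function names (`shrink`, `linearized_bregman`), the step size $\tau = 1/(\alpha\|A\|_2^2)$, the stopping rule, and the demo parameters are illustrative assumptions, not the authors' reference implementation; the demo simply applies the abstract's rule of thumb $\alpha \ge 10\|x^0\|_\infty$.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, tau=None, max_iter=50000, tol=1e-8):
    """Linearized Bregman iteration, in dual-gradient form, for
        min ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.   A x = b.
    Step size, stopping rule, and names are illustrative assumptions."""
    m, n = A.shape
    if tau is None:
        # 1/(alpha * ||A||_2^2) is a safe step for the differentiable dual
        tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    y = np.zeros(m)  # dual variable
    x = np.zeros(n)
    for _ in range(max_iter):
        x = alpha * shrink(A.T @ y, 1.0)  # primal iterate induced by y
        r = b - A @ x                     # feasibility residual = dual gradient
        if np.linalg.norm(r) <= tol * max(np.linalg.norm(b), 1.0):
            break
        y += tau * r                      # dual gradient ascent step
    return x

# Tiny demo: recover a sparse x0 from Gaussian measurements, choosing
# alpha = 10 * ||x0||_inf per the abstract's exact-regularization condition.
rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

Writing the iteration this way reflects why the convergence claim is global: each step is a gradient step on a differentiable dual objective, so no sparsity of the solution or property of $A$ is needed for the iteration to be well defined. The paper's accelerated variants (not sketched here) speed up this same dual iteration.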
