Article

Iteratively Reweighted Least Squares Minimization for Sparse Recovery

Journal

Publisher

WILEY
DOI: 10.1002/cpa.20303

Keywords

-

Funding

  1. National Science Foundation [DMS-0504924, DMS-0530865, DMS-0221642, DMS-0200187, CCF-0515187]
  2. Courant Institute
  3. Office of Naval Research [ONR-N00014-03-1-0051, ONR/DEPSCoR N00014-03-1-0675, ONR/DEPSCoR N00014-00-1-0470]
  4. Army Research Office [DAAD 19-02-1-0028]
  5. European Union via the Marie Curie Individual Fellowship [MOIF-CT-2006-039438]
  6. Program in Applied and Computational Mathematics at Princeton University
  7. Alfred P. Sloan Research Fellowship
  8. New York University Goddard Fellowship
  9. Austrian Science Fund (FWF) [Y 432]

Abstract

Under certain conditions on the $m \times N$ matrix $\Phi$ (where $m < N$), known as the restricted isometry property (RIP), vectors $x \in \mathbb{R}^N$ that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from $y := \Phi x$, even though $\Phi^{-1}(y)$ is typically an $(N-m)$-dimensional hyperplane; in addition, $x$ is then equal to the element of $\Phi^{-1}(y)$ of minimal $\ell_1$-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining $x$, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector $w$, the element in $\Phi^{-1}(y)$ with smallest $\ell_2(w)$-norm. If $x^{(n)}$ is the solution at iteration step $n$, then the new weight $w^{(n)}$ is defined by $w_i^{(n)} := [|x_i^{(n)}|^2 + \epsilon_n^2]^{-1/2}$, $i = 1, \dots, N$, for a decreasing sequence of adaptively defined $\epsilon_n$; this updated weight is then used to obtain $x^{(n+1)}$, and the process is repeated. We prove that when $\Phi$ satisfies the RIP conditions, the sequence $x^{(n)}$ converges for all $y$, regardless of whether $\Phi^{-1}(y)$ contains a sparse vector. If there is a sparse vector in $\Phi^{-1}(y)$, then the limit is this sparse vector, and when $x^{(n)}$ is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the heavier weight $w_i^{(n)} = [|x_i^{(n)}|^2 + \epsilon_n^2]^{\tau/2 - 1}$, $i = 1, \dots, N$, where $0 < \tau < 1$, can recover sparse solutions as well; more importantly, we show that its local convergence is superlinear and approaches a quadratic rate as $\tau$ approaches 0. (C) 2009 Wiley Periodicals, Inc.
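The iteration described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: the function name `irls_sparse_recovery`, the iteration cap, the stopping tolerance, and the small floor on epsilon are illustrative choices, while the weighted least-squares step and the adaptive epsilon update (the minimum of the current epsilon and the (K+1)-st largest magnitude of the iterate divided by N, where K is the assumed sparsity level) follow the scheme the abstract describes.

```python
import numpy as np

def irls_sparse_recovery(Phi, y, K, tau=1.0, max_iter=300, tol=1e-8):
    """IRLS sketch for sparse recovery (illustrative).

    tau=1 uses the weight w_i = (|x_i|^2 + eps^2)^(-1/2), which targets
    l1-minimization; 0 < tau < 1 uses the heavier weight with exponent
    tau/2 - 1 discussed in the abstract.
    """
    m, N = Phi.shape
    w = np.ones(N)          # initial weights: plain least-norm solution
    eps = 1.0
    x = np.zeros(N)
    for _ in range(max_iter):
        # Minimal l2(w)-norm element of {x : Phi x = y}:
        #   x = D Phi^T (Phi D Phi^T)^{-1} y,  with D = diag(1/w).
        D = 1.0 / w
        x_new = D * (Phi.T @ np.linalg.solve(Phi @ (D[:, None] * Phi.T), y))
        # Adaptive epsilon: (K+1)-st largest magnitude of the iterate,
        # scaled by 1/N; epsilon never increases.
        r = np.sort(np.abs(x_new))[::-1]
        eps = min(eps, r[K] / N)  # r[K] is the (K+1)-st largest entry
        if np.linalg.norm(x_new - x) < tol or eps < np.finfo(float).eps:
            # The tiny-eps guard (not in the paper) avoids a nearly
            # singular weighted system once the iterate is K-sparse.
            return x_new
        w = (x_new**2 + eps**2) ** ((tau - 2) / 2)
        x = x_new
    return x
```

Each iterate is feasible by construction (it lies in the affine set $\Phi x = y$); only the weighted norm being minimized changes from step to step, which is what drives the iterates toward the sparse element of the solution set.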

