Article

The Dantzig selector: Statistical estimation when p is much larger than n

Journal

ANNALS OF STATISTICS
Volume 35, Issue 6, Pages 2313-2351

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/009053606000001523

Keywords

statistical linear model; model selection; ideal estimation; oracle inequalities; sparse solutions to underdetermined systems; $\ell_1$-minimization; linear programming; restricted orthonormality; geometry in high dimensions; random matrices

Abstract

In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbb{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$?

To estimate $\beta$, we introduce a new estimator, which we call the Dantzig selector, defined as a solution to the $\ell_1$-regularization problem
$$\min_{\tilde\beta \in \mathbb{R}^p} \|\tilde\beta\|_{\ell_1} \quad \text{subject to} \quad \|X^* r\|_{\ell_\infty} \le (1 + t^{-1})\sqrt{2\log p}\,\sigma,$$
where $r = y - X\tilde\beta$ is the residual vector and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability,
$$\|\hat\beta - \beta\|_{\ell_2}^2 \le C^2 \cdot 2\log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$
Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression, and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
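Since the abstract notes that this convex program can be recast as a linear program, a short sketch may help make the recast concrete. The following Python snippet is our own illustrative assumption, not code from the paper: the function name `dantzig_selector`, the SciPy `linprog` solver choice, and the toy data are all hypothetical. It implements the standard LP reformulation, introducing slack variables $u$ with $-u \le \beta \le u$ so that $\sum_i u_i$ equals $\|\beta\|_{\ell_1}$ at the optimum.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Sketch of the Dantzig selector via its LP recast.

    Solves  min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam
    over the stacked decision vector (b, u) in R^{2p}.
    """
    n, p = X.shape
    G = X.T @ X                      # Gram matrix X^T X
    Xty = X.T @ y

    # Objective: minimize sum(u); b carries zero cost.
    c = np.concatenate([np.zeros(p), np.ones(p)])

    I = np.eye(p)
    Z = np.zeros((p, p))
    A_ub = np.vstack([
        np.hstack([ I, -I]),         #  b - u <= 0
        np.hstack([-I, -I]),         # -b - u <= 0
        np.hstack([ G,  Z]),         #  X^T X b <= X^T y + lam
        np.hstack([-G,  Z]),         # -X^T X b <= lam - X^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), Xty + lam, lam - Xty])

    bounds = [(None, None)] * p + [(0, None)] * p   # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy usage: sparse beta, Gaussian design with roughly unit-normed columns.
rng = np.random.default_rng(0)
n, p, s, sigma = 72, 256, 8, 1.0
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:s] = 3.0
y = X @ beta + sigma * rng.standard_normal(n)

# The abstract's threshold is (1 + 1/t) * sqrt(2 log p) * sigma; for
# simplicity this sketch takes the t -> infinity limit.
lam = np.sqrt(2 * np.log(p)) * sigma
beta_hat = dantzig_selector(X, y, lam)
```

The two pairs of inequality blocks encode, respectively, the slack-variable trick that linearizes the $\ell_1$ objective and the two-sided constraint $\|X^T(y - Xb)\|_{\ell_\infty} \le \lambda$; any LP solver can then handle the problem directly.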

Authors

Emmanuel J. Candès and Terence Tao
