Article

Non-negative least squares for high-dimensional linear models: Consistency and sparse recovery without regularization

Journal

Electronic Journal of Statistics
Volume 7, Pages 3004-3056

Publisher

Institute of Mathematical Statistics (IMS)
DOI: 10.1214/13-EJS868

Keywords

Convex geometry; deconvolution; high dimensions; non-negativity constraints; persistence; random matrices; separating hyperplane; sparse recovery

Funding

  1. Cluster of Excellence 'Multimodal Computing and Interaction' (MMCI) of Deutsche Forschungsgemeinschaft (DFG)


Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such settings. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be as effective as explicit regularization if the design matrix has additional properties, which are met in several applications of non-negative least squares (NNLS). We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso. We argue further that in specific cases NNLS may achieve a better ℓ∞-rate in estimation, and hence also advantages with respect to support recovery when combined with thresholding. From a practical point of view, NNLS does not depend on a regularization parameter and is hence easier to use.
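The abstract's practical point, that NNLS needs no regularization parameter and can be paired with simple thresholding for support recovery, is easy to illustrate. Below is a minimal sketch in Python using SciPy's nnls solver on a simulated sparse regression problem; the uniform non-negative design, noise level, dimensions, and threshold value are illustrative assumptions, not the specific design conditions analyzed in the paper.

    # Minimal sketch (illustrative setup, not the paper's exact conditions):
    # fit non-negative least squares on a simulated sparse problem and
    # recover the support by thresholding. No tuning parameter is needed.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n, p, s = 100, 300, 5          # samples, predictors, sparsity (assumed)

    # Ad-hoc non-negative design, a simple stand-in for the
    # "self-regularizing" designs the paper requires.
    X = rng.uniform(0.0, 1.0, size=(n, p))
    beta = np.zeros(p)
    beta[:s] = rng.uniform(1.0, 2.0, size=s)   # non-negative signal
    y = X @ beta + 0.1 * rng.standard_normal(n)

    beta_hat, _ = nnls(X, y)       # non-negative least squares fit

    # Support recovery by thresholding the NNLS estimate; the threshold
    # here is an arbitrary choice, not the theoretically calibrated level.
    tau = 0.5
    support_hat = np.flatnonzero(beta_hat > tau)
    print("true support:", np.flatnonzero(beta))
    print("estimated support:", support_hat)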

Authors

Martin Slawski; Matthias Hein
