Article

Asymptotic properties for combined L1 and concave regularization

Journal

BIOMETRIKA
Volume 101, Issue 1, Pages 57-70

Publisher

Oxford University Press
DOI: 10.1093/biomet/ast047

Keywords

Concave penalty; Global optimum; Lasso penalty; Prediction; Variable selection

Funding

  1. U.S. National Science Foundation, Division of Mathematical Sciences, Directorate for Mathematical & Physical Sciences (grants 0955316 and 1150318)
  2. University of Southern California

Abstract

Two important goals of high-dimensional modelling are prediction and variable selection. In this article, we consider regularization with combined L1 and concave penalties, and study the sampling properties of the global optimum of the proposed method in ultrahigh-dimensional settings. The L1 penalty provides the minimum regularization needed to remove noise variables and achieve oracle prediction risk, while the concave penalty imposes additional regularization to control model sparsity. In the linear model setting, we prove that the global optimum of our method enjoys the same oracle inequalities as the lasso estimator and admits an explicit bound on the false sign rate, which can be asymptotically vanishing. Moreover, we establish oracle risk inequalities for the method and the sampling properties of computable solutions. Numerical studies suggest that our method yields more stable estimates than using a concave penalty alone.
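To make the combined penalty concrete, here is a minimal sketch of one common way to fit such an estimator: a local linear approximation of the concave part (here MCP is used as the concave penalty, purely as an illustrative choice) combined with proximal gradient (ISTA) steps. This is an assumption-laden toy solver, not the authors' algorithm, and it does not guarantee the global optimum studied in the paper; the function names `combined_lla_ista`, `mcp_grad`, and `soft` are hypothetical.

```python
import numpy as np

def mcp_grad(t, lam, gamma):
    # Derivative of the MCP concave penalty at magnitude t >= 0:
    # lam - t/gamma on [0, gamma*lam), and 0 beyond (no shrinkage of large signals).
    return np.where(t < gamma * lam, lam - t / gamma, 0.0)

def soft(z, thr):
    # Elementwise soft-thresholding operator (prox of a weighted L1 penalty).
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def combined_lla_ista(X, y, lam1, lam2, gamma=3.0, n_iter=500):
    # Toy solver for (1/2n)||y - X b||^2 + lam1*||b||_1 + sum_j MCP(|b_j|; lam2, gamma),
    # linearizing the concave part at the current iterate (LLA) so each step
    # reduces to a soft-threshold with per-coordinate weights.
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        # Weight = L1 part plus the linearized concave (MCP) part.
        w = lam1 + mcp_grad(np.abs(beta), lam2, gamma)
        grad = X.T @ (X @ beta - y) / n
        beta = soft(beta - grad / L, w / L)
    return beta
```

Note how the MCP derivative vanishes for large coefficients, so strong signals end up shrunk only by the small L1 weight `lam1`, while the combined weight `lam1 + lam2` near zero removes noise variables — mirroring the division of labour between the two penalties described in the abstract.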

