Article

SLOPE-ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

Journal

ANNALS OF APPLIED STATISTICS
Volume 9, Issue 3, Pages 1103-1140

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/15-AOAS842

Keywords

Sparse regression; variable selection; false discovery rate; Lasso; sorted ℓ₁ penalized estimation (SLOPE)

Funding

  1. Fulbright Scholarship, NSF Grant [DMS-10-43204]
  2. European Union's 7th Framework Programme [602552]
  3. NSF [DMS-09-06812]
  4. NIH [HG006695, MH101782]
  5. General Wang Yaowu Stanford Graduate Fellowship
  6. AFOSR [FA9550-09-1-0643]
  7. ONR [N00014-09-1-0258]


We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to

min_{b ∈ ℝ^p} (1/2)‖y − Xb‖²_{ℓ₂} + λ₁|b|_(1) + λ₂|b|_(2) + ⋯ + λ_p|b|_(p),

where λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_p ≥ 0 and |b|_(1) ≥ |b|_(2) ≥ ⋯ ≥ |b|_(p) are the absolute values of the entries of b sorted in decreasing order. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ₁ procedures such as the Lasso. Here, the regularizer is a sorted ℓ₁ norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λ_i} is given by the BH critical values λ_BH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-th quantile of the standard normal distribution. SLOPE aims to provide finite-sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λ_BH provably controls the FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while retaining substantial power, as demonstrated in a series of experiments on both simulated and real data.
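As a concrete illustration of the two ingredients in the abstract, the following is a minimal Python sketch (not the authors' implementation; function names are hypothetical) that computes the BH critical-value sequence λ_BH(i) = z(1 − i·q/(2p)) and evaluates the sorted ℓ₁ penalty for a given coefficient vector:

```python
from statistics import NormalDist

def bh_lambdas(p, q):
    """BH critical values: lambda_BH(i) = z(1 - i*q/(2p)),
    where z(alpha) is the alpha-th quantile of N(0, 1)."""
    z = NormalDist().inv_cdf
    return [z(1 - i * q / (2 * p)) for i in range(1, p + 1)]

def sorted_l1_penalty(b, lambdas):
    """Sorted l1 norm: sum_i lambda_i * |b|_(i), where |b|_(1) >= ... >= |b|_(p)
    are the absolute entries of b in decreasing order. Larger coefficients
    are matched with larger lambdas, so stronger signals pay a larger penalty."""
    abs_sorted = sorted((abs(v) for v in b), reverse=True)
    return sum(lam * a for lam, a in zip(lambdas, abs_sorted))
```

Note that bh_lambdas returns a decreasing sequence, as required of {λ_i}; solving the full SLOPE program additionally needs the proximal operator of the sorted ℓ₁ norm, which is not sketched here.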
