Article

Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2011.177

Keywords

Stability; sparsity; Lasso; regularization

Funding

  1. NUS [R-265-000-384-133]
  2. US National Science Foundation (NSF) [EFRI-0735905, CNS-0721532, CNS-0831580]
  3. DTRA [HDTRA1-08-0029]
  4. Israel Science Foundation [890015]
  5. US National Science Foundation (NSF), Directorate for Engineering [1056028]

Abstract

We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
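
Illustrative example (not from the paper): a minimal sketch, assuming scikit-learn's Lasso and Ridge estimators, that contrasts the sparsity of the ℓ1-regularized solution with the dense ℓ2-regularized solution, and measures how much each coefficient vector moves when a single training sample is removed, a crude empirical proxy for the uniform stability studied in the paper. The data dimensions, noise level, and regularization strengths below are arbitrary choices made for illustration.

  import numpy as np
  from sklearn.linear_model import Lasso, Ridge

  rng = np.random.default_rng(0)
  n, d = 50, 20
  X = rng.standard_normal((n, d))
  w_true = np.zeros(d)
  w_true[:3] = [2.0, -1.5, 1.0]            # sparse ground-truth coefficients
  y = X @ w_true + 0.1 * rng.standard_normal(n)

  def fit_and_perturb(estimator_cls, **kwargs):
      # Fit on the full sample, then refit with one point removed,
      # and report the coefficient vector plus the size of the change.
      coef_full = estimator_cls(**kwargs).fit(X, y).coef_
      coef_loo = estimator_cls(**kwargs).fit(X[1:], y[1:]).coef_
      return coef_full, np.linalg.norm(coef_full - coef_loo)

  lasso_coef, lasso_shift = fit_and_perturb(Lasso, alpha=0.1)
  ridge_coef, ridge_shift = fit_and_perturb(Ridge, alpha=1.0)

  print("Lasso: nonzero coefficients =", np.count_nonzero(lasso_coef),
        "| shift after dropping one sample =", round(lasso_shift, 4))
  print("Ridge: nonzero coefficients =", np.count_nonzero(ridge_coef),
        "| shift after dropping one sample =", round(ridge_shift, 4))

On this toy data the Lasso fit is typically sparse and the Ridge fit dense; the paper's theorem concerns worst-case (uniform) stability, so the observed coefficient shifts here should be read as an illustration of the two properties, not as evidence for the formal result.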
