Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 34, Issue 1, Pages 187-193
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2011.177
Keywords
Stability; sparsity; Lasso; regularization
Funding
- NUS [R-265-000-384-133]
- US National Science Foundation (NSF) [EFRI-0735905, CNS-0721532, CNS-0831580]
- DTRA [HDTRA1-08-0029]
- Israel Science Foundation [890015]
- Directorate For Engineering [1056028] Funding Source: National Science Foundation
Abstract
We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable, and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, whereas ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
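The tradeoff in the abstract can be illustrated with a minimal sketch (not the paper's proof technique): under an orthonormal design, the Lasso solution is soft-thresholding of the correlations b = Xᵀy, while the ridge solution is uniform shrinkage b/(1+λ). A small perturbation of the data can flip a Lasso coefficient across the threshold, changing the support discontinuously, whereas the ridge solution moves by at most the size of the perturbation. The example values below are hypothetical.

```python
import numpy as np

def soft_threshold(b, lam):
    # Lasso solution under an orthonormal design: shrink b toward 0,
    # setting small entries exactly to zero (this is what makes it sparse).
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def ridge_shrink(b, lam):
    # Ridge solution under an orthonormal design: uniform shrinkage,
    # a Lipschitz-continuous (hence stable) map of the data.
    return b / (1.0 + lam)

lam = 1.0
b      = np.array([1.05, 2.0, 0.5])  # correlations X^T y (hypothetical)
b_pert = np.array([0.95, 2.0, 0.5])  # same data perturbed by 0.1 in one entry

# Lasso: the perturbation pushes the first coefficient below the
# threshold, so the support shrinks from 2 nonzeros to 1.
print(np.count_nonzero(soft_threshold(b, lam)),
      np.count_nonzero(soft_threshold(b_pert, lam)))   # 2 1

# Ridge: every coefficient moves by at most 0.1 / (1 + lam) = 0.05.
print(np.max(np.abs(ridge_shrink(b, lam) - ridge_shrink(b_pert, lam))))
```

This mirrors the paper's claim in miniature: the thresholding map that produces sparsity is discontinuous in the data, while the ridge map is uniformly continuous and never produces exact zeros.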