Article

Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms

Journal

OPERATIONS RESEARCH
Volume 68, Issue 5, Pages 1517-1537

Publisher

INFORMS
DOI: 10.1287/opre.2019.1919

Keywords

interpretable machine learning; sparsity; Lasso; high-dimensional statistics; mixed integer programming; coordinate descent; large-scale computation

Funding

  1. Office of Naval Research [ONR-N000141512342, ONR-N000141812298]
  2. National Science Foundation [NSF-IIS-1718258]
  3. Massachusetts Institute of Technology

Abstract

The L0-regularized least squares problem (a.k.a. best subsets) is central to sparse statistical learning and has attracted significant attention across the wider statistics, machine learning, and optimization communities. Recent work has shown that modern mixed integer optimization (MIO) solvers can be used to address small to moderate instances of this problem. In spite of the usefulness of L0-based estimators and generic MIO solvers, there is a steep computational price to pay when compared with popular sparse learning algorithms (e.g., based on L1 regularization). In this paper, we aim to push the frontiers of computation for a family of L0-regularized problems with additional convex penalties. We propose a new hierarchy of necessary optimality conditions for these problems. We develop fast algorithms, based on coordinate descent and local combinatorial optimization, that are guaranteed to converge to solutions satisfying these optimality conditions. From a statistical viewpoint, an interesting story emerges. When the signal strength is high, our combinatorial optimization algorithms have an edge in challenging statistical settings. When the signal is lower, pure L0 benefits from additional convex regularization. We empirically demonstrate that our family of L0-based estimators can outperform state-of-the-art sparse learning algorithms on a combination of prediction, estimation, and variable selection metrics under various regimes (e.g., different signal strengths, feature correlations, and numbers of samples and features). Our new open-source sparse learning toolkit L0Learn (available on CRAN and GitHub) reaches up to a threefold speedup (with p up to 10^6) compared with competing toolkits such as glmnet and ncvreg.
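The coordinate descent scheme the abstract refers to admits a compact illustration. Below is a minimal Python sketch, not the paper's L0Learn implementation, of cyclic coordinate descent with hard thresholding for the pure L0 objective (1/2)||y - Xb||^2 + lam*||b||_0, assuming the columns of X are scaled to unit l2 norm; the function name and defaults are illustrative only.

```python
import numpy as np

def l0_coordinate_descent(X, y, lam, max_iter=100, tol=1e-8):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_0.

    Illustrative sketch only; assumes the columns of X have unit l2 norm.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()            # residual r = y - X @ beta
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(p):
            bj_old = beta[j]
            # Univariate fit for coordinate j with its current contribution removed
            btilde = X[:, j] @ r + bj_old
            # Hard threshold: keeping the coordinate must pay for the lam penalty,
            # which happens exactly when btilde^2 / 2 > lam
            bj_new = btilde if btilde ** 2 > 2.0 * lam else 0.0
            if bj_new != bj_old:
                r += X[:, j] * (bj_old - bj_new)
                beta[j] = bj_new
                max_change = max(max_change, abs(bj_new - bj_old))
        if max_change < tol:
            break
    return beta
```

The local combinatorial optimization component can likewise be sketched as a one-swap local search that escapes fixed points of coordinate descent: exchange one selected coordinate for one unselected coordinate whenever the exchange strictly lowers the least squares loss (the L0 penalty is unchanged by a swap). Again a sketch under the same unit-norm assumption, not the paper's algorithm verbatim:

```python
def one_swap_local_search(X, y, beta, tol=1e-10):
    """Greedy one-swap local search around a candidate support.

    Tries to swap a selected coordinate i for an unselected coordinate j
    (support size, hence the L0 penalty, is unchanged) whenever the swap
    strictly lowers (1/2)||y - X b||^2. Assumes unit-norm columns.
    """
    beta = beta.copy()
    r = y - X @ beta
    improved = True
    while improved:
        improved = False
        support = np.flatnonzero(beta)
        zeros = np.flatnonzero(beta == 0)
        if len(zeros) == 0:
            break
        for i in support:
            r_i = r + X[:, i] * beta[i]   # residual with coordinate i removed
            scores = X[:, zeros].T @ r_i  # optimal coefficient for each candidate j
            k = int(np.argmax(np.abs(scores)))
            j, bj = zeros[k], scores[k]
            # Loss after the swap is (1/2)(||r_i||^2 - bj^2); accept if strictly better
            if 0.5 * (r_i @ r_i - bj ** 2) < 0.5 * (r @ r) - tol:
                beta[i] = 0.0
                beta[j] = bj
                r = r_i - X[:, j] * bj
                improved = True
                break                     # rescan with the updated support
    return beta
```

A natural usage pattern, in the spirit of the paper, is to polish a coordinate descent solution with the swap search, e.g. one_swap_local_search(X, y, l0_coordinate_descent(X, y, lam)). The paper's hierarchy of necessary optimality conditions is indexed by the size of the subsets considered for exchange; the one-swap search above corresponds, roughly, to the first level of that hierarchy beyond coordinate-wise optimality.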

Authors

Hussein Hazimeh, Rahul Mazumder
