Article

The Horseshoe-Like Regularization for Feature Subset Selection

Publisher

Springer
DOI: 10.1007/s13571-019-00217-7

Keywords

Bayes regularization; feature selection; horseshoe estimator; non-convex regularization; scale mixtures

Funding

  1. US National Science Foundation [DMS-1613063]

Abstract

This paper introduces an alternative method for feature subset selection called the horseshoe regularization penalty, which shows superior theoretical and computational performance compared to existing methods. The distinguishing feature is the probabilistic representation of the penalty, enabling efficient optimization algorithms and uncertainty quantification.
Feature subset selection arises in many high-dimensional applications of statistics, such as compressed sensing and genomics. The ℓ0 penalty is ideal for this task, the caveat being that it requires the NP-hard combinatorial evaluation of all models. A recent area of considerable interest is the development of efficient algorithms for fitting models with a non-convex ℓγ penalty for γ ∈ (0, 1), which yields sparser models than the convex ℓ1 (lasso) penalty but is harder to fit. We propose an alternative, termed the horseshoe regularization penalty for feature subset selection, and demonstrate its theoretical and computational advantages. The distinguishing feature from existing non-convex optimization approaches is a full probabilistic representation of the penalty as the negative of the logarithm of a suitable prior, which in turn enables efficient expectation-maximization and local linear approximation algorithms for optimization, and MCMC for uncertainty quantification. On synthetic and real data, the resulting algorithms provide better statistical performance, and the computation requires a fraction of the time of state-of-the-art non-convex solvers.
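
The abstract's central idea, reading the penalty as the negative logarithm of a heavy-tailed prior with an infinite spike at zero, can be illustrated with a short Python sketch. This is illustrative only: the functional form log(1 + τ²/θ²) is an assumption modeled on the classical tight bounds for the horseshoe prior density, and the constants and parameterization in the paper may differ. The local linear approximation (LLA) weight below is the penalty's derivative at the current estimate, which reduces each iteration to a weighted soft-thresholding (weighted-lasso) step.

    import numpy as np

    # Sketch of a horseshoe-like penalty as the negative log of a prior.
    # ASSUMPTION: the form log(1 + tau**2 / theta**2) mirrors the classical
    # tight bounds on the horseshoe density; the paper's exact constants
    # and parameterization may differ.
    def horseshoe_like_penalty(theta, tau=1.0):
        """pen(theta) = -log(log(1 + tau^2 / theta^2)), up to an additive constant."""
        return -np.log(np.log1p(tau**2 / theta**2))

    def lla_weight(theta0, tau=1.0, eps=1e-8):
        """LLA weight: d pen / d|theta| evaluated at |theta0|.

        Linearizing the non-convex penalty at the current estimate turns
        each iteration into a weighted-lasso problem.
        """
        t = abs(theta0) + eps
        u = tau**2 / t**2
        # d/dt [-log(log(1 + u))] = 2u / (t * (1 + u) * log(1 + u))
        return 2 * u / (t * (1 + u) * np.log1p(u))

    def soft_threshold(z, lam):
        """Proximal operator of lam * |theta| (one weighted-lasso step)."""
        return np.sign(z) * max(abs(z) - lam, 0.0)

    # One LLA step for a single normal-means observation y:
    # argmin_theta 0.5 * (y - theta)^2 + pen(theta), linearized at theta0.
    y, theta0 = 2.5, 1.0
    theta1 = soft_threshold(y, lla_weight(theta0))

Because the weight diverges as |θ0| → 0, small coefficients receive large thresholds and are set exactly to zero, which is what produces sparser solutions than the lasso's constant weight.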
