Article

Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Journal

MACHINE LEARNING
Volume 101, Issue 1-3, Pages 163-186

Publisher

SPRINGER
DOI: 10.1007/s10994-014-5455-y

Keywords

Zero-norm; Feature selection; Exact penalty; DC programming; DCA

We develop an exact penalty approach for feature selection in machine learning via the zero-norm regularization problem. Using a new result on exact penalty techniques, we equivalently reformulate the original problem as a Difference of Convex (DC) functions program. This approach allows us to treat all the existing convex and nonconvex approximations of the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVMs; it requires solving one linear program at each iteration and enjoys interesting convergence properties. We perform an empirical comparison with several nonconvex approximation approaches and show, using datasets from the UCI database and the NIPS 2003 challenge, that the proposed algorithm is efficient in both feature selection and classification.
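To illustrate the DCA iteration the abstract refers to, here is a minimal one-dimensional sketch, not the paper's algorithm: DCA applied to the capped-l1 penalty min(|x|, 1), one of the nonconvex approximations of the zero-norm that the DC framework covers. The DC split and the closed-form soft-thresholding subproblem are standard; the function names and the toy least-squares data term are illustrative assumptions.

```python
def soft_threshold(z, t):
    """Closed-form minimizer of 0.5*(x - z)**2 + t*|x|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def dca_capped_l1(a, lam, x0=0.0, iters=50):
    """DCA sketch for min_x 0.5*(x - a)**2 + lam*min(|x|, 1).

    The capped-l1 penalty is a DC function:
        min(|x|, 1) = |x| - max(|x| - 1, 0),
    so the objective splits as g(x) - h(x) with
        g(x) = 0.5*(x - a)**2 + lam*|x|   (convex)
        h(x) = lam*max(|x| - 1, 0)        (convex).
    Each DCA step linearizes h at the current iterate and
    minimizes the resulting convex surrogate in closed form.
    """
    x = x0
    for _ in range(iters):
        # subgradient of h at x (take 0 at the kink |x| = 1)
        y = lam * (1.0 if x > 1.0 else -1.0 if x < -1.0 else 0.0)
        # x_{k+1} = argmin_x g(x) - y*x = soft-threshold of (a + y)
        x_new = soft_threshold(a + y, lam)
        if abs(x_new - x) < 1e-12:
            return x_new
        x = x_new
    return x

print(dca_capped_l1(2.0, 0.5))  # -> 2.0 (penalty saturates, no shrinkage)
print(dca_capped_l1(0.3, 0.5))  # -> 0.0 (small coefficient set exactly to zero)
```

In the paper's SVM setting, each DCA subproblem is a linear program in many variables rather than this scalar closed form, but the structure is the same: linearize the concave part, solve the convex surrogate, repeat.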

