Article

Large Margin Subspace Learning for feature selection

Journal

PATTERN RECOGNITION
Volume 46, Issue 10, Pages 2798-2806

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2013.02.012

Keywords

Feature selection; l2,1-norm regularization; Large margin maximization; Subspace learning

Funding

  1. Program for Natural Science Foundation of China [61173129]
  2. Specialized Research Fund for the Doctoral Program of Higher Education of China [20120191110026]
  3. Fundamental Research Funds for the Central Universities [CDJXS11182240]
  4. National Natural Science Foundation of China [60970034, 61170287]
  5. Foundation for the Author of National Excellent Doctoral Dissertations [2007B4]
  6. Foundation for the Author of Hunan Provincial Excellent Doctoral Dissertations
  7. Chongqing Municipal Education Commission, Chongqing, China [KJ120723, KJ110730]

Abstract

Recent research has shown the benefits of the large margin framework for feature selection. In this paper, we propose a novel feature selection algorithm, termed Large Margin Subspace Learning (LMSL), which seeks a projection matrix that maximizes the margin of a given sample, defined as the distance between its nearest miss (the nearest neighbor with a different label) and its nearest hit (the nearest neighbor with the same label). Instead of computing the nearest neighbors of the given sample directly, we treat each sample with a different (the same) label as a potential nearest miss (hit), with the probability estimated by kernel density estimation. In this way, the nearest miss (hit) is computed as an expectation over all samples of the other (same) classes. To perform feature selection, an l2,1-norm penalty is imposed on the projection matrix to enforce row sparsity. An efficient algorithm is then proposed to solve the resulting optimization problem. Comprehensive experiments compare the proposed algorithm with five state-of-the-art algorithms: RFS, SPFS, mRMR, TR, and LLFS. It outperforms the first four; compared with LLFS, it achieves competitive performance with significantly faster computation. (C) 2013 Elsevier Ltd. All rights reserved.
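The expected nearest miss/hit construction and the l2,1-norm penalty described in the abstract can be sketched as follows. This is an illustrative reimplementation based only on the abstract, not the authors' code: the Gaussian kernel for the density estimate, the bandwidth `sigma`, and the helper names `soft_margin` and `l21_norm` are all assumptions.

```python
import numpy as np

def soft_margin(X, y, W, sigma=1.0):
    """Margin of each sample under projection W (illustrative sketch).

    The nearest miss/hit of sample i are replaced by expectations over all
    different-/same-class samples, weighted by a Gaussian kernel density
    (a common choice; the paper's exact kernel may differ).
    """
    n = X.shape[0]
    margins = np.zeros(n)
    idx = np.arange(n)
    for i in range(n):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w[i] = 0.0                         # exclude the sample itself
        same = (y == y[i]) & (idx != i)    # candidate nearest hits
        diff = y != y[i]                   # candidate nearest misses
        p_hit = w * same / max(np.sum(w * same), 1e-12)
        p_miss = w * diff / max(np.sum(w * diff), 1e-12)
        nh = p_hit @ X                     # expected nearest hit
        nm = p_miss @ X                    # expected nearest miss
        margins[i] = (np.linalg.norm(W.T @ (X[i] - nm))
                      - np.linalg.norm(W.T @ (X[i] - nh)))
    return margins

def l21_norm(W):
    """l2,1 norm: sum of the l2 norms of the rows of W.

    Penalizing this drives entire rows of the projection matrix to zero,
    so the corresponding input features are discarded.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))
```

Maximizing the summed margins while penalizing `l21_norm(W)` yields a row-sparse projection; features whose rows survive are the selected ones.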
