Article

Approximate Orthogonal Sparse Embedding for Dimensionality Reduction

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2015.2422994

Keywords

Dimensionality reduction; elastic net; image recognition; manifold learning; sparse projections

Funding

  1. Hong Kong Polytechnic University, Hong Kong [G-YM53]
  2. National Natural Science Foundation of China [61203376, 61375012, 61071179, 61125305]
  3. Hi-Tech Research and Development Program of China [2006AA01Z119]
  4. Guangdong Natural Science Foundation [S2012040007289]
  5. Shenzhen Municipal Science and Technology Innovation Council [JCYJ20130329152024199]

Abstract

Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity, i.e., L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition (SVD). We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the case of small sample sizes.
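At a high level, the solver described in the abstract alternates two steps on top of the LLE reconstruction weights: an elastic-net regression that produces sparse projection directions and an SVD step that restores (approximate) orthogonality. The sketch below is a minimal illustration of that style of alternation using off-the-shelf tools (scikit-learn's ElasticNet and NumPy's SVD); the function names, parameter choices, and the spectral-complement trick used to target the smallest ONPP eigenvectors are illustrative assumptions, not the paper's modified elastic net.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.neighbors import NearestNeighbors


def lle_weights(X, k=5, reg=1e-3):
    """Standard LLE reconstruction weights: each sample is expressed as an
    affine combination of its k nearest neighbours."""
    n = X.shape[0]
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X, return_distance=False)
    W = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]                    # drop the point itself
        Z = X[neigh] - X[i]                   # centre neighbours on x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)    # regularise the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, neigh] = w / w.sum()             # LLE constraint: weights sum to one
    return W


def sparse_onpp_like(X, d=2, k=5, alpha=1e-3, l1_ratio=0.5, n_iter=30):
    """Sparse approximation of ONPP-style projection directions by alternating
    an elastic-net step with an SVD (Procrustes) step, sparse-PCA style."""
    n, m = X.shape
    W = lle_weights(X, k)
    R = (np.eye(n) - W) @ X
    M = R.T @ R                               # ONPP minimises tr(P^T M P) with P^T P = I
    # Spectral complement: the smallest eigenvectors of M become the largest of C.
    C = np.linalg.eigvalsh(M)[-1] * np.eye(m) - M
    cvals, cvecs = np.linalg.eigh(C)
    H = np.sqrt(np.clip(cvals, 0, None))[:, None] * cvecs.T   # H^T H = C
    A = cvecs[:, ::-1][:, :d]                 # start from the leading eigenvectors of C
    B = np.zeros((m, d))
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False, max_iter=10000)
    for _ in range(n_iter):
        for j in range(d):                    # elastic-net step: sparse surrogate of a_j
            B[:, j] = enet.fit(H, H @ A[:, j]).coef_
        U, _, Vt = np.linalg.svd(C @ B, full_matrices=False)
        A = U @ Vt                            # SVD step: best orthonormal A for fixed B
    norms = np.linalg.norm(B, axis=0)
    B[:, norms > 0] /= norms[norms > 0]       # normalise the non-zero directions
    return B                                  # columns are sparse projection directions
```

Given a data matrix X (n samples by m features), B = sparse_onpp_like(X, d=2) returns m-by-d sparse projection directions, and new samples are embedded as x @ B; larger alpha or l1_ratio values trade reconstruction fidelity for sparser projections.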
