4.6 Article

Rotational Invariant Dimensionality Reduction Algorithms

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 47, Issue 11, Pages 3733-3746

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2016.2578642

Keywords

Dimensionality reduction; image classification; image feature extraction; rotational invariant (RI) subspace learning

Funding

  1. Natural Science Foundation of China [61573248, 61203376, 61375012, 61272050, 61362031, 61332011, 61370163]
  2. Research Grants Council of Hong Kong [531708]
  3. Science Foundation of Guangdong Province [2014A030313556]
  4. Shenzhen Municipal Science and Technology Innovation Council [JCYJ20150324141711637]

Abstract

A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, because they use the L2-norm as the metric. In this paper, a series of methods based on the L2,1-norm is proposed for linear dimensionality reduction. Since the L2,1-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design the different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding framework to a more general form. We provide comprehensive analyses to show the essential properties of the proposed framework. This paper shows that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms achieve competitive performance compared with previous L2-norm-based subspace learning algorithms.
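
For readers unfamiliar with the norm the abstract refers to, the following is a minimal illustrative sketch; the exact objective functions used in the paper may differ. For a data matrix X = [x_1, ..., x_n] whose columns are samples, the L2,1-norm is

\[
\|X\|_{2,1} \;=\; \sum_{i=1}^{n} \|x_i\|_2 \;=\; \sum_{i=1}^{n} \sqrt{\textstyle\sum_{j} X_{ji}^{2}},
\]

and a representative rotational-invariant projection problem (an assumed illustrative form, not necessarily the paper's own) is

\[
\min_{W:\,W^{\top}W = I} \;\; \bigl\| X - W W^{\top} X \bigr\|_{2,1},
\]

where W is the learned projection matrix. Because each sample's reconstruction error enters through its unsquared Euclidean norm, a single outlying image contributes linearly rather than quadratically to the objective, which is the source of the robustness to outliers and image variations described in the abstract.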

