Article

LMAE: A large margin Auto-Encoders for classification

Journal

SIGNAL PROCESSING
Volume 141, Pages 137-143

Publisher

ELSEVIER
DOI: 10.1016/j.sigpro.2017.05.030

Keywords

Auto-Encoder; Large Margin; kNN; Classification

Funding

  1. National Natural Science Foundation of China [61671480, 61572486]
  2. Fundamental Research Funds for the Central Universities
  3. China University of Petroleum (East China) [14CX02203A]
  4. Yunnan Natural Science Funds [2016FB105]

The Auto-Encoder, a representative deep learning method, has been demonstrated to achieve superior performance in many applications. Hence, it is attracting increasing attention, and several variants have been reported, including Contractive Auto-Encoders, Denoising Auto-Encoders, Sparse Auto-Encoders, and Nonnegativity-Constrained Auto-Encoders. Recently, a Discriminative Auto-Encoder was proposed to improve performance by considering within-class and between-class information. In this paper, we propose the Large Margin Auto-Encoder (LMAE) to further boost discriminability by enforcing samples from different classes to be distributed with a large margin in the hidden feature space. In particular, we stack single-layer LMAEs to construct a deep neural network that learns suitable features, which are then fed into a softmax classifier for classification. Extensive experiments are conducted on the MNIST and CIFAR-10 datasets. The experimental results demonstrate that the proposed LMAE outperforms the traditional Auto-Encoder algorithm. (C) 2017 Elsevier B.V. All rights reserved.
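The idea described in the abstract can be illustrated with a minimal sketch: a single-layer auto-encoder whose objective combines reconstruction error with a hinge-style margin penalty on the hidden codes, pulling same-class codes together and pushing different-class codes at least a margin apart. This is not the authors' implementation; the exact loss form, the pairwise penalty, and all parameter values here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny synthetic two-class dataset: 6 samples, 2 features (illustrative only).
X = rng.normal(size=(6, 2))
y = np.array([0, 0, 0, 1, 1, 1])

# Single-layer auto-encoder parameters (hidden size 3, chosen arbitrarily).
W1, b1 = rng.normal(scale=0.1, size=(2, 3)), np.zeros(3)  # encoder
W2, b2 = rng.normal(scale=0.1, size=(3, 2)), np.zeros(2)  # decoder

def lmae_loss(X, y, margin=1.0, lam=0.5):
    """Reconstruction loss plus a large-margin penalty in hidden space
    (a sketch of the LMAE objective; `margin` and `lam` are assumed)."""
    H = sigmoid(X @ W1 + b1)         # hidden codes
    X_hat = sigmoid(H @ W2 + b2)     # reconstruction
    recon = np.mean((X - X_hat) ** 2)
    pull, push = 0.0, 0.0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            d = np.sum((H[i] - H[j]) ** 2)
            if y[i] == y[j]:
                pull += d                          # same class: contract
            else:
                push += max(0.0, margin - d)       # different class: separate
    return recon + lam * (pull + push)

print(lmae_loss(X, y))
```

In the paper's setting, such single-layer units would be trained and stacked to form a deep network, with the final hidden features passed to a softmax classifier.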
