Article

LMAE: A large margin Auto-Encoders for classification

Journal

SIGNAL PROCESSING
Volume 141, Pages 137-143

Publisher

ELSEVIER
DOI: 10.1016/j.sigpro.2017.05.030

Keywords

Auto-Encoder; Large Margin; kNN; Classification

Funding

  1. National Natural Science Foundation of China [61671480, 61572486]
  2. Fundamental Research Funds for the Central Universities
  3. China University of Petroleum (East China) [14CX02203A]
  4. Yunnan Natural Science Funds [2016FB105]

Abstract

Auto-Encoders, as a representative deep learning method, have been demonstrated to achieve superior performance in many applications. Hence, they are drawing more and more attention, and several variants of Auto-Encoders have been reported, including Contractive Auto-Encoders, Denoising Auto-Encoders, Sparse Auto-Encoders, and Nonnegativity-Constrained Auto-Encoders. Recently, a Discriminative Auto-Encoder was reported that improves performance by considering within-class and between-class information. In this paper, we propose the Large Margin Auto-Encoder (LMAE) to further boost discriminability by enforcing samples from different classes to be distributed with a large margin in the hidden feature space. In particular, we stack single-layer LMAEs to construct a deep neural network that learns proper features, and we finally feed these features into a softmax classifier for classification. Extensive classification experiments are conducted on the MNIST and CIFAR-10 datasets. The experimental results demonstrate that the proposed LMAE outperforms the traditional Auto-Encoder algorithm. (C) 2017 Elsevier B.V. All rights reserved.
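The abstract describes the core idea only at a high level. Below is a minimal PyTorch sketch of what a single-layer LMAE objective could look like, assuming a contrastive-style hinge penalty on the hidden codes (same-class codes pulled together, different-class codes pushed at least a margin apart); the paper's exact margin formulation and its kNN-based neighbor handling may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LMAELayer(nn.Module):
        """One Auto-Encoder layer whose hidden codes are trained to be
        separated across classes by a large margin (hypothetical sketch)."""
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.encoder = nn.Linear(in_dim, hid_dim)
            self.decoder = nn.Linear(hid_dim, in_dim)

        def forward(self, x):
            h = torch.sigmoid(self.encoder(x))      # hidden feature
            x_hat = torch.sigmoid(self.decoder(h))  # reconstruction
            return h, x_hat

    def lmae_loss(x, x_hat, h, y, margin=1.0, lam=0.1):
        # Standard Auto-Encoder reconstruction term.
        recon = F.mse_loss(x_hat, x)
        # Pairwise distances between hidden codes within the batch.
        d = torch.cdist(h, h)
        same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
        same.fill_diagonal_(0.0)  # ignore self-pairs
        diff = (y.unsqueeze(0) != y.unsqueeze(1)).float()
        # Pull same-class codes together; push different-class codes
        # at least `margin` apart via a hinge penalty.
        pull = (same * d.pow(2)).sum() / same.sum().clamp(min=1.0)
        push = (diff * F.relu(margin - d).pow(2)).sum() / diff.sum().clamp(min=1.0)
        return recon + lam * (pull + push)

As the abstract indicates, layers trained this way can be stacked greedily (each layer's hidden codes become the next layer's input), with the final features fed to a softmax classifier, e.g. an nn.Linear(hid_dim, num_classes) head trained with cross-entropy. The margin weight lam and the margin value here are illustrative hyperparameters, not values taken from the paper.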

