Article

A semi-supervised auto-encoder using label and sparse regularizations for classification

Journal

APPLIED SOFT COMPUTING
Volume 77, Pages 205-217

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2019.01.021

Keywords

Auto-encoder; Semi-supervised learning; Classification; ELM; DBN

Funding

  1. National Natural Science Foundation of China [61673193]
  2. Fundamental Research Funds for the Central Universities, China [JUSRP51635B]
  3. China Postdoctoral Science Foundation [2017M621625]
  4. Natural Science Foundation of Jiangsu Province, China [BK20181341]

Abstract

The semi-supervised auto-encoder (SSAE) is a promising deep-learning method that combines the advantages of unsupervised and supervised learning. The unsupervised process extracts the underlying concepts of the data as intrinsic information and improves the generalization ability of the learned representation, while the supervised process captures the category rules carried by the labels and thereby improves classification accuracy. In this paper, we propose a novel semi-supervised learning method, the label and sparse regularization auto-encoder (LSRAE), which integrates label and sparse constraints into the structure of the AE. The sparse regularization activates only a minority of important neurons while inhibiting most of the others, so that LSRAE yields a more local and informative representation of the data. Moreover, through the label constraint, the supervised learning process extracts features governed by the category rules and further enhances classifier performance. To test LSRAE extensively, we conduct experiments on the benchmark datasets USPS, ISOLET and MNIST. The experimental results demonstrate the superiority of LSRAE over state-of-the-art feature extraction methods including AE, LSAE, SAE, ELM, DBN, and adaptive DBN. (C) 2019 Elsevier B.V. All rights reserved.
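The abstract describes a loss that combines reconstruction, a sparsity penalty on the hidden code, and a label (classification) term applied where labels exist. The following PyTorch sketch illustrates that general idea only; it is not the authors' exact LSRAE formulation, and the class name SparseLabelAE, the layer sizes, the KL-based sparsity penalty, and the weights alpha, beta and target rho are illustrative assumptions.

```python
# Illustrative sketch of a semi-supervised auto-encoder whose loss combines
# reconstruction error, a KL-divergence sparsity penalty on the hidden code,
# and a supervised cross-entropy term on labelled samples. Hyperparameters
# (alpha, beta, rho) and layer sizes are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLabelAE(nn.Module):
    def __init__(self, n_in=256, n_hid=100, n_classes=10):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hid)
        self.decoder = nn.Linear(n_hid, n_in)
        self.classifier = nn.Linear(n_hid, n_classes)  # label-constraint head

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden code in (0, 1)
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction
        logits = self.classifier(h)             # class scores from the code
        return h, x_hat, logits

def kl_sparsity(h, rho=0.05, eps=1e-8):
    """KL divergence between a target sparsity rho and the mean activation."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def semi_supervised_loss(model, x, y=None, alpha=1e-3, beta=1.0):
    """Reconstruction + sparsity; the label term is added only when y is given."""
    h, x_hat, logits = model(x)
    loss = F.mse_loss(x_hat, x) + alpha * kl_sparsity(h)
    if y is not None:                            # unlabelled batches skip this term
        loss = loss + beta * F.cross_entropy(logits, y)
    return loss

# Usage sketch on random data standing in for, e.g., 16x16 USPS digit vectors.
model = SparseLabelAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_lab, y_lab = torch.rand(32, 256), torch.randint(0, 10, (32,))
x_unlab = torch.rand(32, 256)
for x_batch, y_batch in [(x_lab, y_lab), (x_unlab, None)]:
    opt.zero_grad()
    semi_supervised_loss(model, x_batch, y_batch).backward()
    opt.step()
```

After such training, the learned hidden code (or the classifier head) would typically be used for the downstream classification evaluated in the paper; the exact training schedule and evaluation protocol are given in the article itself.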
