Article

Learning a good representation with unsymmetrical auto-encoder

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 27, Issue 5, Pages 1361-1367

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-015-1939-3

Keywords

Auto-encoder; Neural networks; Feature learning; Deep learning; Unsupervised learning

Funding

  1. National Science Foundation of China [61432012]

Abstract

Auto-encoders play a fundamental role in unsupervised feature learning and in learning initial parameters of deep architectures for supervised tasks. For given input samples, robust features are used to generate robust representations from two perspectives: (1) invariance to small variations of the samples and (2) reconstruction by decoders with minimal error. Traditional auto-encoders with different regularization terms have symmetrical numbers of encoder and decoder layers, and sometimes symmetrical parameters. We investigate the relation between the numbers of layers and propose an unsymmetrical structure, i.e., an unsymmetrical auto-encoder (UAE), to learn more effective features. We present empirical results of feature learning using the UAE and state-of-the-art auto-encoders for classification tasks on a range of datasets. We also analyze the gradient vanishing problem mathematically and provide suggestions for the appropriate number of layers to use in UAEs with a logistic activation function. In our experiments, UAEs demonstrated superior performance compared with other auto-encoders under the same configuration.
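To make the structural idea concrete, the following is a minimal NumPy sketch of an unsymmetrical auto-encoder forward pass: several stacked logistic encoder layers compress the input while a single decoder layer reconstructs it. The layer sizes and variable names are illustrative assumptions, not taken from the paper, and no training loop is shown.

```python
import numpy as np

def logistic(x):
    # Logistic (sigmoid) activation, as discussed in the paper's analysis.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical encoder sizes: 64 -> 32 -> 16 -> 8 (three encoder layers).
sizes = [64, 32, 16, 8]
enc_W = [rng.standard_normal((m, n)) * 0.1
         for m, n in zip(sizes[:-1], sizes[1:])]
# Single decoder layer maps the 8-dim code straight back to 64 dims:
# this mismatch in layer counts is the "unsymmetrical" structure.
dec_W = rng.standard_normal((sizes[-1], sizes[0])) * 0.1

def encode(x):
    h = x
    for W in enc_W:          # stacked logistic encoder layers
        h = logistic(h @ W)
    return h

def decode(h):
    return logistic(h @ dec_W)  # one-layer decoder

x = rng.random((5, 64))         # 5 samples with 64 features each
x_hat = decode(encode(x))
err = np.mean((x - x_hat) ** 2) # reconstruction error to be minimized
```

Training such a model would minimize `err` over the weight matrices; fewer decoder layers also means fewer layers through which gradients must propagate, which relates to the gradient vanishing analysis mentioned in the abstract.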
