4.4 Article

Denoising deep extreme learning machine for sparse representation

Journal

MEMETIC COMPUTING
Volume 9, Issue 3, Pages 199-212

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s12293-016-0185-2

Keywords

K-SVD; Extreme learning machine; Denoising; Deep ELM-AE; Representation learning

Funding

  1. National Key Project for Basic Research of China [2013CB329403]
  2. National Natural Science Foundation of China [61327809]
  3. National High-Tech Research and Development Plan [2015AA042306]
  4. Natural Science Foundation of Shanxi Province [2014011018-4]
  5. Shanxi Scholarship Council of China [2013-033, 2015-045]

Abstract

In recent years, a great deal of research has focused on the sparse representation of signals. In particular, the dictionary learning algorithm K-SVD was introduced to efficiently learn a redundant dictionary from a set of training signals, and much progress has since been made on different aspects of it. A related technique is the extreme learning machine (ELM), a single-hidden-layer feed-forward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD: a denoising deep extreme learning machine based on autoencoders (DDELM-AE) for sparse representation. In other words, the DDELM-AE produces a new learned representation which, used as the new input, makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets. The performance of deep models (i.e., the stacked autoencoder) is comparable to that of our method. The experimental results indicate that the proposed method is highly efficient in terms of both speed and accuracy.
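The abstract describes a two-stage pipeline: learn a representation with a denoising ELM autoencoder, then feed that representation to K-SVD for sparse coding. The sketch below illustrates this pipeline under stated assumptions, not the paper's exact method: a single ELM-AE layer (the paper stacks several), Gaussian input corruption for the denoising step, ridge-regularized output weights, and scikit-learn's MiniBatchDictionaryLearning standing in for K-SVD. The hidden size, noise level, and regularization strength are illustrative values chosen for the example.

```python
# Minimal sketch of the "learn representation, then sparse-code it" pipeline,
# assuming one denoising ELM-AE layer and a scikit-learn dictionary learner in
# place of K-SVD. All hyperparameters here are hypothetical.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

def denoising_elm_ae(X, n_hidden=256, noise_std=0.1, ridge=1e-3):
    """One ELM autoencoder layer: random hidden weights, closed-form output weights.

    The input is corrupted with Gaussian noise (the "denoising" part); the output
    weights beta are obtained by ridge-regularized least squares so that
    tanh(X_noisy W + b) beta approximates the clean X. The learned representation
    is X @ beta.T, following the usual ELM-AE construction (no backpropagation).
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))          # random input weights
    b = rng.standard_normal(n_hidden)                        # random biases
    X_noisy = X + noise_std * rng.standard_normal(X.shape)   # denoising corruption
    H = np.tanh(X_noisy @ W + b)                             # hidden activations
    # beta: (n_hidden, n_features), solved in closed form.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X)
    return X @ beta.T                                        # new representation

# Toy data standing in for the training signals.
X = rng.standard_normal((500, 64))
Z = denoising_elm_ae(X)                                      # DDELM-AE-style features

# Sparse-code the learned representation; a K-SVD implementation could be
# substituted for this dictionary learner without changing the pipeline.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
codes = dico.fit_transform(Z)
print(codes.shape)  # (500, 128) sparse codes over the learned dictionary
```

Stacking additional ELM-AE layers, as the deep variant in the paper does, would amount to feeding each layer's output representation into another call of the same closed-form construction.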

