Article

Denoising deep extreme learning machine for sparse representation

Journal

MEMETIC COMPUTING
Volume 9, Issue 3, Pages 199-212

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s12293-016-0185-2

Keywords

K-SVD; Extreme learning machine; Denoising; Deep ELM-AE; Representation learning

Funding

  1. National Key Project for Basic Research of China [2013CB329403]
  2. National Natural Science Foundation of China [61327809]
  3. National High-Tech Research and Development Plan [2015AA042306]
  4. Natural Science Foundation of Shanxi Province [2014011018-4]
  5. Shanxi Scholarship Council of China [2013-033, 2015-045]

Abstract

In recent years, a great deal of research has focused on the sparse representation of signals. In particular, the dictionary learning algorithm K-SVD efficiently learns a redundant dictionary from a set of training signals, and much progress has been made on different aspects of this problem. A related technique is the extreme learning machine (ELM), a single-layer feed-forward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD: a denoising deep extreme learning machine based on autoencoders (DDELM-AE) for sparse representation. The DDELM-AE produces a new learned representation which, used as the input, makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets; the results are comparable to those of deep models such as stacked autoencoders. The experimental results show that the proposed method is highly efficient in terms of both speed and accuracy.
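
The pipeline described in the abstract can be sketched in a few lines: an ELM autoencoder with random hidden weights and analytically solved output weights produces a new representation of the training signals, which is then sparse-coded over a learned dictionary. The sketch below is illustrative only and is not the authors' code; it uses NumPy for the ELM-AE and scikit-learn's DictionaryLearning as an accessible stand-in for K-SVD, and all layer sizes and parameter values are assumptions.

    # Minimal sketch (not the authors' code): learn an ELM-AE representation,
    # then sparse-code it with dictionary learning as a stand-in for K-SVD.
    # Assumes NumPy and scikit-learn; all sizes and names are illustrative.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)

    def elm_autoencoder(X, n_hidden=256, reg=1e-3):
        """Single ELM-AE layer: random hidden weights, analytic output weights."""
        W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                           # hidden activations
        # beta minimizes ||H @ beta - X||^2 + reg * ||beta||^2 (ridge solution)
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
        return X @ beta.T                                # the new representation

    X = rng.standard_normal((500, 64))        # toy stand-in for training signals
    Z = elm_autoencoder(elm_autoencoder(X))   # stack two ELM-AE layers ("deep")

    # Sparse-code the learned representation. DictionaryLearning is used here
    # only as an accessible substitute for the K-SVD algorithm of the paper.
    dl = DictionaryLearning(n_components=128, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5, max_iter=20)
    codes = dl.fit_transform(Z)
    print(codes.shape)                        # (500, 128) sparse codes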
