Article

Incremental regularized extreme learning machine and its enhancement

Journal

NEUROCOMPUTING
Volume 174, Pages 134-142

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2015.01.097

Keywords

Extreme Learning Machine; Regularization; Incremental learning; Neural networks

Funding

  1. National key basic research and development program (973 program) [2013CB329504]
  2. Natural Science Fund of Zhejiang Province [Y1110152]

Abstract

Extreme Learning Machine (ELM), proposed by Huang et al. [2], is a novel algorithm for single hidden layer feedforward neural networks (SLFNs) with extremely fast learning speed and good generalization performance. When new hidden nodes are added to an existing network, retraining the whole network would be time consuming, so EM-ELM [13] was proposed to calculate the output weights incrementally. However, two issues remain in EM-ELM: first, the initial hidden-layer output matrix may be rank deficient, so the computation loses accuracy; second, EM-ELM cannot always achieve good generalization performance because of overfitting. We therefore propose an improved, regularization-based version of EM-ELM called the Incremental Regularized Extreme Learning Machine (IR-ELM). As new hidden nodes are added one by one, IR-ELM updates the output weights recursively and efficiently. An enhancement of IR-ELM (EIR-ELM), which selects the hidden nodes to be added to the network, is also introduced in this paper. Empirical studies on benchmark data sets for regression and classification problems show that IR-ELM (and EIR-ELM) consistently achieves better generalization performance than EM-ELM with similar training time. (C) 2015 Elsevier B.V. All rights reserved.
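The core idea described in the abstract, recursively updating the regularized least-squares solution beta = (H^T H + lam*I)^{-1} H^T T as hidden nodes are appended, can be sketched as follows. This is an illustrative sketch, not the authors' code: it maintains the inverse with a generic block-matrix (Schur complement) update rather than the paper's exact recursion, and the names ir_elm_fit, max_nodes, and lam are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ir_elm_fit(X, T, max_nodes=50, lam=1e-2):
    """Grow hidden nodes one by one, keeping M = (H^T H + lam*I)^{-1}
    current via a block-matrix inverse update, so the regularized
    output weights beta = M H^T T never require full retraining."""
    n, d = X.shape
    W = np.empty((d, 0)); b = np.empty(0)   # random hidden parameters
    H = np.empty((n, 0))                    # hidden-layer output matrix
    M, beta = None, None
    for _ in range(max_nodes):
        w = rng.standard_normal((d, 1))     # random input weights
        bk = rng.standard_normal()          # random bias
        h = sigmoid(X @ w + bk)             # new hidden column, shape (n, 1)
        if M is None:
            M = 1.0 / (h.T @ h + lam)       # first node: 1x1 inverse
        else:
            u = H.T @ h                     # cross terms with old columns
            Mu = M @ u
            s = (h.T @ h + lam - u.T @ Mu).item()   # Schur complement
            M = np.block([[M + (Mu @ Mu.T) / s, -Mu / s],
                          [-Mu.T / s, np.array([[1.0 / s]])]])
        W = np.hstack([W, w]); b = np.append(b, bk); H = np.hstack([H, h])
        beta = M @ (H.T @ T)                # updated output weights
    return W, b, beta

def ir_elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Per the abstract, EIR-ELM would additionally choose among candidate hidden nodes at each step rather than accepting every random node; one plausible reading is to generate several candidates per iteration and keep the one that most reduces training error, though the paper's selection criterion should be consulted for the exact rule.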
