Article

Least learning machine and its experimental studies on regression capability

Journal

APPLIED SOFT COMPUTING
Volume 21, Pages 677-684

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2014.04.001

Keywords

Feedforward neural network; Extreme learning machine; Hidden-feature-space ridge regression; Least learning machine

Funding

  1. Hong Kong Polytechnic University [1-ZV5V]
  2. National Natural Science Foundation of China [61272210, 61170122, 61300151]
  3. Fundamental Research Funds for the Central Universities [JUSRP111A38, JUSRP21128]
  4. Natural Science Foundation of Jiangsu province [BK2009067, BK2011417, BK20130155]

Abstract

Feedforward neural networks have been extensively used to approximate complex nonlinear mappings directly from input samples. However, their traditional learning algorithms are usually much slower than required. In this work, two hidden-feature-space ridge regression methods, HFSR and centered-ELM, are first proposed for feedforward networks. As special kernel methods, both HFSR and centered-ELM share two important characteristics: the rigorous Mercer condition on kernel functions is not required, and they can inherently propagate the prominent advantages of ELM into the MLFN. In addition to the randomly assigned weights adopted in both ELM and HFSR, HFSR exploits another source of randomness, namely exemplars randomly selected from the training set for the kernel activation functions. Through forward layer-by-layer data transformation, HFSR and centered-ELM can be extended to the MLFN. Accordingly, the least learning machine (LLM) is proposed as a unified framework for HFSR and centered-ELM, applicable to both the SLFN and the MLFN with single or multiple outputs. LLM in fact provides a new learning method for the MLFN that retains the virtues ELM offers only for the SLFN: just the parameters in the last hidden layer need to be adjusted, all parameters in the other hidden layers can be randomly assigned, and LLM trains much faster than BP on MLFN sample sets. The experimental results clearly indicate the power of LLM in nonlinear regression modeling. (C) 2014 Elsevier B.V. All rights reserved.
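The last-layer-only training idea described above can be sketched as ELM-style ridge regression in a random hidden-feature space. The following is a minimal illustration under assumed choices (tanh activation, NumPy, hypothetical helper names `elm_fit`/`elm_predict`), not the paper's actual LLM/HFSR implementation:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=1e-3, rng=None):
    """Fit a single-hidden-layer network ELM-style: input weights and
    biases are random and fixed; only the output weights are solved,
    via ridge regression in the hidden feature space."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                           # hidden-layer feature map
    # Closed-form ridge solution for the output weights beta
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: approximate y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=40, ridge=1e-6, rng=0)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because the hidden layer is random and fixed, the only learning step is one linear solve, which is the source of the speed advantage over BP that the abstract highlights.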
