Article

Random Search Enhancement of Incremental Regularized Multiple Hidden Layers ELM

Journal

IEEE ACCESS
Volume 7, Pages 36866-36878

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2019.2905077

Keywords

Extreme learning machine; multiple hidden layers; incremental learning procedures; Cholesky decomposition; random search enhancement

Funding

  1. National Key Research and Development Program of China [2017YFB0304100]
  2. National Natural Science Foundation of China [71672032]
  3. Fundamental Research Funds for the Central Universities [N180404012, N182608003]

Abstract

The extreme learning machine (ELM) has recently become one of the most successful approaches in machine learning, especially for classification and regression problems. A key advantage of the multiple-hidden-layer ELM (MELM) is its low training time: the weights of the hidden nodes are chosen randomly, while the weights of the output nodes are determined analytically. However, using too many or too few hidden nodes during training can lead to over-fitting or under-fitting at prediction time. To address the design of the MELM network architecture, this paper applies an enhanced random search method to the MELM network model and proposes an incremental MELM training algorithm based on the Cholesky decomposition, namely random search enhancement of incremental regularized MELM (EIR-MELM). The algorithm automatically determines the optimal MELM network structure by adding hidden nodes one by one and computes the output weights with the Cholesky decomposition, which effectively reduces the computational burden of incrementally adding hidden-layer neurons. However, some hidden nodes added to the network may have only a weak influence on its final output, and adding randomly generated nodes directly merely increases the structural complexity of the network. EIR-MELM therefore includes a selection phase when adding hidden nodes: following the principle of structural risk minimization, the best of several randomly generated candidate nodes is added to the network, giving EIR-MELM a more compact structure.
Experiments on benchmark classification datasets show that EIR-MELM automatically determines the optimal MELM network structure with high computational efficiency.
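The incremental growth with a candidate-selection phase can be sketched as follows. This is a simplified single-hidden-layer illustration in Python/NumPy, not the authors' EIR-MELM implementation: the function and parameter names (`incremental_relm`, `candidates`, `C`) are hypothetical, and for clarity it re-solves the regularized least-squares output weights from scratch at each step, whereas the paper updates them incrementally via the Cholesky decomposition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def incremental_relm(X, T, max_nodes=20, candidates=5, C=1.0, seed=0):
    """Grow a regularized ELM one hidden node at a time (illustrative sketch).

    At each step, `candidates` random nodes are evaluated and the one that
    minimizes the regularized training error (a structural-risk criterion:
    empirical error plus a weight-norm penalty) is kept.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.empty((0, d))   # input weights of accepted hidden nodes
    b = np.empty(0)        # biases of accepted hidden nodes
    beta = None            # output weights
    for _ in range(max_nodes):
        best = None
        for _ in range(candidates):
            w_c = rng.standard_normal(d)       # candidate input weights
            b_c = rng.standard_normal()        # candidate bias
            W_try = np.vstack([W, w_c])
            b_try = np.append(b, b_c)
            H = sigmoid(X @ W_try.T + b_try)   # hidden-layer output matrix
            L = H.shape[1]
            # Regularized least squares: beta = (H'H + I/C)^{-1} H'T.
            # The paper instead updates this solve incrementally via Cholesky.
            A = H.T @ H + np.eye(L) / C
            beta_try = np.linalg.solve(A, H.T @ T)
            # Structural risk: empirical error + regularization term.
            risk = np.mean((H @ beta_try - T) ** 2) + np.sum(beta_try ** 2) / C
            if best is None or risk < best[0]:
                best = (risk, w_c, b_c, beta_try)
        _, w_c, b_c, beta = best               # keep the best candidate node
        W = np.vstack([W, w_c])
        b = np.append(b, b_c)
    return W, b, beta

def predict(X, W, b, beta):
    return sigmoid(X @ W.T + b) @ beta
```

The selection phase is the key difference from plain incremental ELM: instead of accepting every randomly generated node, only the candidate with the lowest regularized risk is added, which keeps the final network compact.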

