Article

Regularization incremental extreme learning machine with random reduced kernel for regression

Journal

NEUROCOMPUTING
Volume 321, Issue -, Pages 72-81

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.neucom.2018.08.082

Keywords

Regularization; Random reduced kernel; Incremental extreme learning machine; Regression

Funding

  1. Zhejiang Provincial Natural Science Foundation of China [LY18F030018]
  2. Natural Science Foundation of China [51376055]

Abstract

For regression tasks, the existing extreme learning machine (ELM) and kernel extreme learning machine (KELM) algorithms suffer from singularity and over-fitting problems when the number of training samples is smaller than the number of hidden-layer neurons. To overcome these shortcomings, this paper introduces a random reduced kernel and a regularization parameter, and proposes the regularization incremental extreme learning machine with random reduced kernel (RKRIELM). RKRIELM combines a kernel function with the incremental extreme learning machine (I-ELM) to avoid the randomness of hidden-node parameters, thereby resolving the singularity problem that arises when the number of initial training samples is smaller than the number of hidden-layer neurons. It uses the number of hidden-layer neurons as the stopping condition for the training loop, and the regularization parameter reduces the risk of over-fitting. Regression experiments on standard data sets compared the proposed method with ELM, KELM, the reduced kernel extreme learning machine, the rotation forest selective ensemble extreme learning machine, reduced support vector regression, and gray wolf optimization support vector regression. The results indicate that the proposed method achieves lower prediction error and better training efficiency than the other algorithms in most cases. (C) 2018 Elsevier B.V. All rights reserved.
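The abstract only outlines the algorithm, so the following is a minimal Python sketch of one plausible reading of RKRIELM, assuming an RBF kernel, kernel centers drawn at random from the training set (the "random reduced kernel"), an I-ELM-style residual update for each new kernel neuron, and a regularized single-neuron weight beta = <e, h> / (<h, h> + 1/C), where C is the regularization parameter. The names rkrielm_fit / rkrielm_predict and all parameter values are hypothetical; this is not the authors' implementation.

import numpy as np

def rbf(X, c, gamma=1.0):
    # Kernel column: RBF similarity of every sample in X (N x d) to center c (d,).
    return np.exp(-gamma * ((X - c) ** 2).sum(axis=1))

def rkrielm_fit(X, y, max_neurons=30, reg=1e3, gamma=1.0, seed=0):
    # Hypothetical sketch of RKRIELM training (not the authors' code).
    # Random reduced kernel: candidate centers are a random subset of X.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))[:max_neurons]  # loop ends at max_neurons
    centers, betas = [], []
    e = y.astype(float).copy()                     # residual starts as the targets
    for idx in order:
        h = rbf(X, X[idx], gamma)                  # new kernel neuron's outputs
        # I-ELM-style regularized weight: beta = <e, h> / (<h, h> + 1/C),
        # where the 1/reg term plays the role of the regularization parameter.
        b = (h @ e) / (h @ h + 1.0 / reg)
        e -= b * h                                 # shrink the residual
        centers.append(X[idx])
        betas.append(b)
    return np.array(centers), np.array(betas)

def rkrielm_predict(Xq, centers, betas, gamma=1.0):
    # Prediction is a weighted sum of kernel responses to the stored centers.
    K = np.exp(-gamma * ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    return K @ betas

# Toy usage: fit a noisy sinc curve and report training RMSE.
X = np.linspace(-5.0, 5.0, 200)[:, None]
y = np.sinc(X).ravel() + 0.05 * np.random.default_rng(1).normal(size=200)
centers, betas = rkrielm_fit(X, y, max_neurons=30, reg=1e3, gamma=2.0)
print(np.sqrt(np.mean((rkrielm_predict(X, centers, betas) - y) ** 2)))

Refitting only the newest neuron's weight keeps each incremental step linear in the number of samples, which is consistent with the training-efficiency claim, though the paper's exact update rule may differ from this sketch.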

