Article

The generalized extreme learning machines: Tuning hyperparameters and limiting approach for the Moore-Penrose generalized inverse

Journal

NEURAL NETWORKS
Volume 144, Pages 591-602

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2021.09.008

Keywords

Generalized extreme learning machine; Multiple hidden layer feedforward neural networks; Universal approximation; Moore-Penrose generalized inverse; Output weight matrix; Attack prediction

Funding

  1. Mid-career Research Program through the NRF - Korea government (MEST) [NRF-2019R1A2C1002706]
  2. National Science Foundation
  3. US Department of Homeland Security


This paper introduces the Generalized Extreme Learning Machine (GELM), which incorporates analyzed hyperparameters and a limiting approach for the Moore-Penrose generalized inverse (M-P GI) into the learning process. Experimental results show the advantages of GELM in prediction performance and learning speed.
In this paper, we propose the generalized extreme learning machine (GELM), an ELM that incorporates analyzed hyperparameters of ELM, such as the sizes and ranks of the weight matrices, and a limiting approach for the Moore-Penrose generalized inverse (M-P GI) into the learning process. ELM overcomes shortcomings of traditional deep learning, such as time-consuming iterative execution, because it learns quickly by removing the time spent adjusting hyperparameters. For single hidden layer feedforward neural networks, there is a desirable number of hidden nodes in ELM that minimizes prediction error. However, it is difficult to use this number because it is tied to the amount of data used, and datasets tend to be large. We consider ELM for multiple hidden layer feedforward neural networks. We analyze the matrices derived in the network and characterize the weight matrices and biases with respect to prediction accuracy and learning speed, based on mathematical theory and a limiting approach for the M-P GI. The final output matrix of GELM is formulated explicitly. Experiments are conducted to verify the analysis using network traffic data, including DDoS attacks. The performance of GELM, such as accuracy and learning speed, is compared for networks with single and multiple hidden layers. Numerical results show the advantages of GELM in these performance measures, and the use of multiple hidden layers in GELM does not significantly affect performance. The theory-based prediction performance obtained from GELM serves as a criterion for the margin of deep learning performance. (C) 2021 Elsevier Ltd. All rights reserved.
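To illustrate the core mechanism the abstract describes, the sketch below shows a minimal single-hidden-layer ELM in NumPy: input weights and biases are random and fixed, and the output weights are obtained through the limiting form of the Moore-Penrose generalized inverse, beta = lim_{c→0+} (H^T H + cI)^{-1} H^T T. This is an illustrative assumption about the standard ELM setup, not the paper's exact GELM formulation; all function and variable names here are hypothetical.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, c=1e-8, seed=0):
    """Fit a minimal single-hidden-layer ELM (illustrative sketch).

    Only the output weights `beta` are learned; hidden weights/biases
    stay random. `c` is the regularization term in the limiting
    approximation of the M-P generalized inverse.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Regularized normal equations: approaches H^+ T as c -> 0+
    beta = np.linalg.solve(H.T @ H + c * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy 1-D regression target in one (non-iterative) step
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X) + 0.01 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T, n_hidden=50, c=1e-8, seed=0)
pred = elm_predict(X, W, b, beta)
```

Because the output weights come from a single linear solve rather than iterative gradient descent, training is fast, which is the speed advantage the abstract attributes to ELM-style learning.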

Authors


Reviews

Primary Rating

4.7
Insufficient ratings

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommendations

No data available