Article

Numerical Computation of Partial Differential Equations by Hidden-Layer Concatenated Extreme Learning Machine

Journal

JOURNAL OF SCIENTIFIC COMPUTING
Volume 95, Issue 2, Pages -

Publisher

SPRINGER/PLENUM PUBLISHERS
DOI: 10.1007/s10915-023-02162-0

Keywords

Extreme learning machine; Hidden layer concatenation; Random weight neural networks; Least squares; Scientific machine learning; Random basis

Abstract
Extreme learning machine (ELM) is a type of randomized neural network originally developed for linear classification and regression problems in the mid-2000s, and it has recently been extended to computational partial differential equations (PDEs). This method can yield highly accurate solutions to linear/nonlinear PDEs, but it requires the last hidden layer of the neural network to be wide to achieve high accuracy. If the last hidden layer is narrow, the accuracy of the existing ELM method is poor, irrespective of the rest of the network configuration. In this paper we present a modified ELM method, termed HLConcELM (hidden-layer concatenated ELM), to overcome this drawback of the conventional ELM method. The HLConcELM method can produce highly accurate solutions to linear/nonlinear PDEs whether the last hidden layer of the network is narrow or wide. The new method is based on a type of modified feedforward neural network (FNN), termed HLConcFNN (hidden-layer concatenated FNN), which incorporates a logical concatenation of the hidden layers in the network and exposes all the hidden nodes to the output-layer nodes. HLConcFNNs have the interesting property that, given a network architecture, when additional hidden layers are appended to the network or extra nodes are added to the existing hidden layers, the representation capacity of the HLConcFNN associated with the new architecture is guaranteed to be no smaller than that of the original architecture. Here representation capacity refers to the set of all functions that can be exactly represented by the neural network of a given architecture. We present ample benchmark tests with linear/nonlinear PDEs to demonstrate the computational accuracy and performance of the HLConcELM method and its superiority over the conventional ELM from previous works.
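The core idea of the abstract can be sketched in a few lines of NumPy: fix random hidden-layer weights (as in ELM), concatenate the outputs of all hidden layers into one feature matrix, and solve for the linear output weights by least squares. This is a minimal, hypothetical illustration on a 1D function-fitting problem, not the paper's PDE solver; all function names and parameter choices here are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hlconc_features(x, weights, biases):
    """Forward pass returning the concatenation of ALL hidden-layer outputs,
    so every hidden node is exposed to the linear output layer (HLConcFNN idea)."""
    h = x
    layers = []
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)  # random, fixed hidden weights (ELM-style)
        layers.append(h)
    return np.concatenate(layers, axis=1)

# Hypothetical architecture: input dim 1, hidden widths [50, 50, 5].
# Note the narrow LAST layer: a conventional ELM would use only its
# 5 features, while the concatenated feature matrix has 105 columns.
widths = [50, 50, 5]
dims = [1] + widths
weights = [rng.uniform(-1, 1, (dims[i], dims[i + 1])) for i in range(len(widths))]
biases = [rng.uniform(-1, 1, (1, dims[i + 1])) for i in range(len(widths))]

# Collocation-style training data for a smooth target f(x) = sin(pi x)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(np.pi * x)

Phi = hlconc_features(x, weights, biases)       # shape (200, 105)
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights via least squares

err = np.max(np.abs(Phi @ beta - y))
print(f"max fitting error: {err:.3e}")
```

For a PDE, the least-squares (or nonlinear least-squares) system would instead be assembled from the residuals of the equation and boundary/initial conditions at collocation points, but the structural point is the same: only the output-layer coefficients are trained, over the concatenated hidden features.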
