Article

Generating random weights and biases in feedforward neural networks with random hidden nodes

Journal

INFORMATION SCIENCES
Volume 481, Pages 33-56

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2018.12.063

Keywords

Activation functions; Function approximation; Feedforward neural networks; Neural networks with random hidden nodes; Randomized learning algorithms

Funding

  1. National Science Centre, Poland [2017/27/B/ST6/01804]


Neural networks with random hidden nodes have attracted increasing interest from researchers and practitioners. This is due to their unique features, such as very fast training and the universal approximation property. In these networks, the weights and biases of the hidden nodes, which determine the nonlinear feature mapping, are set randomly and are not learned. Appropriate selection of the intervals from which the weights and biases are drawn is extremely important, yet this topic has not been sufficiently explored in the literature. In this work, a method of generating random weights and biases is proposed. The method generates the hidden-node parameters in such a way that the nonlinear fragments of the activation functions are located in the input-space regions containing data and can be used to construct the surface approximating a nonlinear target function. The weights and biases depend on the input data range and the activation function type. The proposed method also allows us to control the degree of generalization of the model. All of this leads to improved approximation performance of the network. Several experiments show very promising results. (C) 2018 Elsevier Inc. All rights reserved.
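The idea described in the abstract can be illustrated with a minimal ELM-style sketch. This is not the authors' exact generation method: the `w_scale` parameter, the uniform weight distribution, and the random anchor-point heuristic for the biases are all illustrative assumptions. The key point it demonstrates is placing each sigmoid's steep (nonlinear) fragment inside the data region, with only the output weights learned (here by least squares).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_random_hidden_net(X, y, n_hidden=100, w_scale=10.0, seed=0):
    """Randomly generate hidden weights/biases; learn only output weights."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    # Hidden weights: uniform, rescaled by the input range
    # (an illustrative heuristic, not the paper's formula).
    W = rng.uniform(-w_scale, w_scale, size=(X.shape[1], n_hidden)) / span[:, None]
    # Pick a random anchor point in the data region for each node and set
    # the bias so the sigmoid's steep fragment is centered on that point.
    centers = rng.uniform(lo, hi, size=(n_hidden, X.shape[1]))
    b = -np.sum(centers * W.T, axis=1)
    H = sigmoid(X @ W + b)                        # random nonlinear feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

# Usage: approximate a nonlinear 1-D target function.
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(4.0 * np.pi * X[:, 0])
W, b, beta = fit_random_hidden_net(X, y)
mse = np.mean((predict(X, W, b, beta) - y) ** 2)
```

Centering the biases on points drawn from the data region is what keeps the nonlinear parts of the activations where the data lie; drawing weights and biases from a fixed interval such as [-1, 1], independent of the data, often leaves the sigmoids nearly linear (or saturated) over the input domain.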

Authors


