Article

Stochastic configuration network ensembles with selective base models

Journal

NEURAL NETWORKS
Volume 137, Pages 106-118

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2021.01.011

Keywords

Stochastic configuration networks; Randomized learner models; Neural network ensemble; Educational data analytics

Funding

  1. Science and Technology Projects of Guangdong Province, China [2018B010109002]
  2. Science and Technology Project of Guangzhou Municipality, China [201904010393]
  3. National Natural Science Foundation of China [61802132]
  4. National Key R&D Program of China [2018AAA0100304]

Abstract

Studies have demonstrated that stochastic configuration networks (SCNs) have good potential for rapid data modeling because of their sufficient learning power, which is theoretically guaranteed. Empirical studies have verified that the learner models produced by SCNs can usually achieve favorable test performance in practice, but more in-depth theoretical analysis of their generalization power would be useful for constructing SCN-based ensemble models with enhanced generalization capacities. In particular, given a collection of independently developed SCN-based learner models, it is useful to select certain base learners that can potentially obtain preferable test results, rather than simply averaging over all of the base models, in order to build an effective ensemble model. In this study, we propose a novel framework for building SCN ensembles by exploring key factors that may affect the generalization performance of the base models. Under a mild assumption, we provide a comprehensive theoretical framework for examining a learner model's generalization error, and we formulate a novel indicator that combines measurements of the training errors, the output weights, and the hidden layer output matrix; our proposed algorithm uses this indicator to select a subset of appropriate base models from a pool of randomized learner models. A toy example of one-dimensional function approximation, a case study on developing a predictive model for forecasting student learning performance, and two large-scale data sets were used in our experiments. The experimental results indicate that our proposed method has remarkable advantages for building ensemble models. (C) 2021 Elsevier Ltd. All rights reserved.
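
For readers who want a concrete picture of the selective-ensemble idea described in the abstract, the following is a minimal Python sketch, not the authors' implementation: it uses a simple random-feature learner as a stand-in for a full SCN, and the selection score is a hypothetical stand-in that combines the training error, the output-weight norm, and the norm of the hidden layer output matrix. The paper's exact indicator and the SCN configuration procedure are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def train_base_model(X, y, n_hidden):
    """Fit one randomized learner: random hidden weights, least-squares output weights."""
    d = X.shape[1]
    W = rng.normal(size=(d, n_hidden))            # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def indicator(model, X, y, lam=1e-3):
    """Hypothetical selection score: smaller means 'safer' to include.
    Combines the training error with a penalty on the output-weight norm
    and the spectral norm of H, as rough proxies for generalization risk."""
    W, b, beta = model
    H = np.tanh(X @ W + b)
    train_err = np.mean((H @ beta - y) ** 2)
    return train_err + lam * np.linalg.norm(beta) * np.linalg.norm(H, 2)

# Toy one-dimensional regression target, echoing the paper's
# function-approximation example (the exact target function differs).
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = 0.2 * np.exp(-(10 * X[:, 0] - 4) ** 2) + np.sin(3 * X[:, 0])

pool = [train_base_model(X, y, n_hidden=30) for _ in range(20)]
scores = [indicator(m, X, y) for m in pool]
chosen = np.argsort(scores)[:5]                   # keep the 5 best-scoring models

# Selective ensemble: simple average over the chosen subset only.
y_hat = np.mean([predict(pool[i], X) for i in chosen], axis=0)
print("ensemble training MSE:", np.mean((y_hat - y) ** 2))

Averaging only the lowest-scoring subset, rather than the whole pool, mirrors the paper's premise that a selected subset of base models can generalize better than a plain average over all of them.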
