Article

Ensembles of Biologically Inspired Optimization Algorithms for Training Multilayer Perceptron Neural Networks

Journal

APPLIED SCIENCES-BASEL
Volume 12, Issue 19

Publisher

MDPI
DOI: 10.3390/app12199997

Keywords

ensembles; neural networks; optimization algorithms; neuroevolution

Funding

  1. UEFISCDI Romania [PN-III-P4-ID-PCE-2020-0551, 91/2021]

Abstract

This paper utilizes various biologically inspired optimization algorithms to train multilayer perceptron neural networks and generate good regression models. By combining different optimization algorithms into a hybrid ensemble optimizer, the search capability is improved. Experimental results show that the neural networks generated by the hybrid multiple elite strategy are the most dependable regression models.
Artificial neural networks have proven to be effective in a wide range of fields, providing solutions to various problems. Training artificial neural networks using evolutionary algorithms is known as neuroevolution. The idea of finding not only the optimal weights and biases of a neural network but also its architecture has drawn the attention of many researchers. In this paper, we use different biologically inspired optimization algorithms to train multilayer perceptron neural networks for generating regression models. Specifically, our contribution involves analyzing and finding a strategy for combining several algorithms into a hybrid ensemble optimizer, which we apply for the optimization of a fully connected neural network. The goal is to obtain good regression models for studying and making predictions for the process of free radical polymerization of methyl methacrylate (MMA). In the first step, we use a search procedure to find the best parameter values for seven biologically inspired optimization algorithms. In the second step, we use a subset of the best-performing algorithms and improve the search capability by combining the chosen algorithms into an ensemble of optimizers. We propose three ensemble strategies that do not involve changes in the logic of optimization algorithms: hybrid cascade, hybrid single elite solution, and hybrid multiple elite solutions. The proposed strategies inherit the advantages of each individual optimizer and have faster convergence at a computational effort very similar to an individual optimizer. Our experimental results show that the hybrid multiple elite strategy ultimately produces neural networks which constitute the most dependable regression models for the aforementioned process.
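The multiple elite strategy described above can be sketched in miniature. Everything in the snippet below is an illustrative assumption rather than the authors' implementation: a toy sphere objective stands in for the MLP training loss, a simple hill climber stands in for each biologically inspired optimizer, and the pool and round sizes are arbitrary. The key idea it demonstrates is the one stated in the abstract: several optimizers run independently, and after each round the best solutions form a shared elite pool that reseeds all of them.

```python
import random

def sphere(x):
    # Toy objective; in the paper this would be the regression loss
    # of an MLP on the MMA polymerization data (assumption).
    return sum(v * v for v in x)

def mutate(x, step=0.1):
    # Small random perturbation of a candidate solution.
    return [v + random.uniform(-step, step) for v in x]

def hill_climber(seed, f, iters=50):
    # Stand-in for one biologically inspired optimizer: accept only
    # improving moves starting from the given seed solution.
    best = seed
    for _ in range(iters):
        cand = mutate(best)
        if f(cand) < f(best):
            best = cand
    return best

def multiple_elite_ensemble(f, dim=3, optimizers=3, rounds=5, elites=2):
    # Each optimizer starts from its own random point; after every
    # round, the shared elite pool (the `elites` best solutions found
    # so far) reseeds all optimizers for the next round.
    pool = [[random.uniform(-1, 1) for _ in range(dim)]
            for _ in range(optimizers)]
    for r in range(rounds):
        pool = [hill_climber(p, f) for p in pool]
        pool.sort(key=f)
        if r < rounds - 1:
            elite_pool = pool[:elites]
            pool = [mutate(random.choice(elite_pool))
                    for _ in range(optimizers)]
    return pool[0]

random.seed(0)
best = multiple_elite_ensemble(sphere)
print(sphere(best))
```

Because the elite pool only ever keeps improving solutions, each optimizer in the next round starts at least as close to the optimum as the best finisher of the previous round, which is the faster-convergence effect the abstract attributes to the ensemble.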

