Article

Multi-objective evolutionary optimization of polynomial neural networks for modelling and prediction of explosive cutting process

Journal

ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Volume 22, Issue 4-5, Pages 676-687

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2008.11.005

Keywords

Explosive cutting; Multi-objective optimization; Genetic algorithms; GMDH; Pareto

Abstract

In this paper, evolutionary algorithms (EAs) are deployed for multi-objective Pareto optimal design of group method of data handling (GMDH)-type neural networks, which are used for modelling an explosive cutting process from input-output experimental data. Multi-objective EAs (the non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism are used for Pareto optimization of such GMDH-type neural networks. The important conflicting objectives of GMDH-type neural networks considered in this work are training error (TE), prediction error (PE), and the number of neurons (N). Different pairs of these objective functions are selected for 2-objective optimization processes, and the optimal Pareto front obtained in each case exhibits the trade-off between the corresponding pair of conflicting objectives, thus providing different non-dominated optimal choices of GMDH-type neural network models for the explosive cutting process. Moreover, all three objectives are considered in a 3-objective optimization process, which leads to additional non-dominated choices of GMDH-type models representing the trade-offs among training error, prediction error, and number of neurons (network complexity) simultaneously. Overlay graphs of these Pareto fronts also reveal that the 3-objective results include those of the 2-objective results and thus provide more optimal choices for the multi-objective design of GMDH-type neural networks in terms of minimum training error, minimum prediction error, and minimum complexity. (C) 2008 Elsevier Ltd. All rights reserved.
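The Pareto selection idea underlying this design process can be illustrated with a minimal sketch (not the authors' implementation): candidate GMDH-type networks are compared on the three objectives (TE, PE, N), and only non-dominated candidates are retained as optimal trade-offs. The Candidate records and the numeric objective values below are hypothetical placeholders for illustration only.

```python
# Minimal sketch of Pareto (non-dominance) selection over candidate networks.
# Candidate names and objective values are hypothetical, not from the paper.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    name: str
    objectives: Tuple[float, float, int]  # (TE, PE, N), all to be minimized


def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Return the non-dominated subset of the candidate pool."""
    return [
        c for c in candidates
        if not any(dominates(other.objectives, c.objectives)
                   for other in candidates if other is not c)
    ]


if __name__ == "__main__":
    pool = [
        Candidate("net-A", (0.012, 0.030, 4)),
        Candidate("net-B", (0.015, 0.025, 6)),
        Candidate("net-C", (0.020, 0.040, 5)),  # dominated by net-A
        Candidate("net-D", (0.010, 0.045, 3)),
    ]
    for c in pareto_front(pool):
        print(c.name, c.objectives)
```

NSGA-II itself goes further, applying non-dominated sorting and crowding-distance ranking to a whole population every generation; the sketch only shows the dominance test and front extraction that those steps build on.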
