Article

A comparative analysis of training methods for artificial neural network rainfall-runoff models

Journal

APPLIED SOFT COMPUTING
Volume 6, Issue 3, Pages 295-306

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2005.02.002

Keywords

artificial neural networks; rainfall-runoff modelling; real-coded genetic algorithms; self-organizing maps; back-propagation training algorithm


This paper compares various training methods available for training multi-layer perceptron (MLP) artificial neural networks (ANNs) for modelling the rainfall-runoff process. The training methods investigated include the popular back-propagation algorithm (BPA), a real-coded genetic algorithm (RGA), and a self-organizing map (SOM). The SOM was used to first classify the input-output space into different categories, after which feed-forward MLP models were developed for each category using the BPA. Daily average rainfall and streamflow data derived from an existing catchment were employed to develop all ANN models investigated in this study, and a wide variety of standard statistical performance evaluation measures were employed to evaluate the performances of the various ANN models developed. The results indicate that first classifying the input-output space into different categories using the SOM and then developing a separate BPA-trained ANN model for each class performs better than developing a single ANN rainfall-runoff model trained using the BPA. The ANN rainfall-runoff model trained using the RGA was able to provide a better generalization of the complex, dynamic, non-linear, and fragmented rainfall-runoff process than the other approaches investigated in this study. The RGA-trained ANN model significantly outperformed the ANN model trained using the BPA, and was also able to overcome certain limitations of BPA-trained ANN rainfall-runoff models reported by many researchers in the past. It is noted that the performances of ANN models should be evaluated using a wide variety of statistical performance indices rather than relying on a few global error statistics that are similar in nature to the global error minimized at the output layer of an ANN. (C) 2005 Elsevier B.V. All rights reserved.
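To make the clustering-then-modelling strategy described in the abstract concrete, the sketch below illustrates the general idea in Python: a small self-organizing map partitions the input space, and a separate back-propagation-trained MLP is fitted to each partition. The synthetic data, network sizes, and library choices (NumPy, scikit-learn's MLPRegressor) are illustrative assumptions, not the authors' implementation or catchment data.

```python
# Minimal sketch (not the authors' code) of the SOM-then-per-class-MLP idea:
# 1) a small 1-D self-organizing map clusters the input vectors,
# 2) a separate feed-forward MLP (trained by back-propagation) is fit per cluster.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for lagged rainfall/streamflow predictors (X) and runoff (y).
X = rng.random((500, 4))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(500)

# --- 1. Train a tiny 1-D SOM on the input space -------------------------------
n_nodes, n_iter = 4, 2000
som = rng.random((n_nodes, X.shape[1]))            # codebook (node weight) vectors
for t in range(n_iter):
    x = X[rng.integers(len(X))]                    # pick a random training sample
    bmu = np.argmin(((som - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                    # decaying learning rate
    for j in range(n_nodes):                       # Gaussian neighbourhood update
        h = np.exp(-((j - bmu) ** 2) / 2.0)
        som[j] += lr * h * (x - som[j])

# Assign every training sample to its nearest SOM node (its "category").
clusters = np.argmin(((X[:, None, :] - som[None]) ** 2).sum(-1), axis=1)

# --- 2. Fit one back-propagation-trained MLP per SOM category -----------------
models = {}
for c in np.unique(clusters):
    m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    m.fit(X[clusters == c], y[clusters == c])
    models[c] = m

# Prediction routes a new input through the MLP of its nearest SOM node.
def predict(x_new):
    c = int(np.argmin(((som - x_new) ** 2).sum(axis=1)))
    return models[c].predict(x_new[None])[0]

print(predict(X[0]), y[0])
```

At prediction time each input is routed to the MLP attached to its best-matching SOM node, which is the divide-and-model approach the paper compares against a single BPA-trained network and an RGA-trained network.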
