Journal
NEURAL PROCESSING LETTERS
Volume 32, Issue 1, Pages 31-44
Publisher
SPRINGER
DOI: 10.1007/s11063-010-9141-1
Keywords
Differential evolution; Conjugate gradients; Neural network training; Lamarckian evolution; Hybridization
The paper describes two schemes that follow the model of Lamarckian evolution and combine differential evolution (DE), a population-based stochastic global search method, with the local optimization algorithm of conjugate gradients (CG). In the first scheme, each offspring is fine-tuned by CG before competing with its parent. In the second, CG is used to improve both parents and offspring in a manner that is completely seamless for individuals surviving more than one generation. Experiments involved training the weights of feed-forward neural networks on three synthetic and four real-life problems. In six out of seven cases the DE-CG hybrid, which preserves and uses information about each solution's local optimization process, outperformed two recent variants of DE.
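The Lamarckian idea in the first scheme can be illustrated with a minimal sketch: a DE/rand/1/bin loop in which each trial vector is locally refined before selection, and the *refined* genotype itself replaces the parent if it wins. Everything here is illustrative, not the paper's implementation: the 2-D sphere function stands in for a network training loss, and plain numerical gradient descent stands in for the conjugate-gradient routine.

```python
import random

# Hypothetical toy objective: the 2-D sphere function stands in for the
# neural-network training error minimized in the paper.
def loss(x):
    return sum(v * v for v in x)

def refine(x, steps=5, lr=0.1, h=1e-6):
    """Local refinement by forward-difference gradient descent --
    a simple stand-in for the conjugate-gradient step."""
    x = list(x)
    for _ in range(steps):
        fx = loss(x)
        g = [(loss(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h
             for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def lamarckian_de(dim=2, pop_size=10, gens=30, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1/bin: mutate three distinct others, then crossover.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            # Lamarckian step: the locally improved genotype itself
            # competes with, and may replace, its parent.
            trial = refine(trial)
            if loss(trial) <= loss(pop[i]):
                pop[i] = trial
    return min(pop, key=loss)

best = lamarckian_de()
print(loss(best))
```

Because the refined coordinates are written back into the population, information gained during local optimization is inherited by later generations, which is what distinguishes Lamarckian hybridization from a Baldwinian scheme (where only the improved fitness, not the improved genotype, would be kept).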