Article

A Lamarckian Hybrid of Differential Evolution and Conjugate Gradients for Neural Network Training

Journal

NEURAL PROCESSING LETTERS
Volume 32, Issue 1, Pages 31-44

Publisher

SPRINGER
DOI: 10.1007/s11063-010-9141-1

Keywords

Differential evolution; Conjugate gradients; Neural network training; Lamarckian evolution; Hybridization

Abstract

The paper describes two schemes that follow the model of Lamarckian evolution and combine differential evolution (DE), a population-based stochastic global search method, with the local optimization algorithm of conjugate gradients (CG). In the first scheme, each offspring is fine-tuned by CG before competing with its parent. In the second, CG is used to improve both parents and offspring in a manner that is completely seamless for individuals that survive more than one generation. Experiments involved training the weights of feed-forward neural networks to solve three synthetic and four real-life problems. In six out of seven cases the DE-CG hybrid, which preserves and uses information on each solution's local optimization process, outperformed two recent variants of DE.
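To make the first scheme concrete, the sketch below shows Lamarckian DE-CG training of a tiny feed-forward network. It is an illustrative reconstruction based only on the abstract, not the authors' implementation: the network size, the XOR task, and the parameters `pop_size`, `F`, `CR`, and `cg_iters` are assumptions, and `scipy.optimize.minimize` with `method="CG"` stands in for the conjugate-gradient local search.

```python
# Minimal sketch (assumed details, not the paper's exact method) of Lamarckian
# DE-CG: each DE trial vector is refined by a few CG iterations, and the
# refined weights are written back into the genotype before selection.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Tiny feed-forward net: 2 inputs -> 3 tanh hidden units -> 1 linear output,
# with all weights flattened into one parameter vector for DE/CG.
N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def mse_loss(w, X, y):
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out.ravel() - y) ** 2)

# XOR as a stand-in training task (the paper uses synthetic and real-life data).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def de_cg(pop_size=20, F=0.5, CR=0.9, generations=50, cg_iters=10):
    pop = rng.uniform(-1, 1, (pop_size, DIM))
    fit = np.array([mse_loss(w, X, y) for w in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1/bin mutation and binomial crossover.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(DIM) < CR
            cross[rng.integers(DIM)] = True
            trial = np.where(cross, mutant, pop[i])
            # Lamarckian step: refine the offspring with a short CG run and
            # keep the improved weights in the genotype.
            res = minimize(mse_loss, trial, args=(X, y), method="CG",
                           options={"maxiter": cg_iters})
            trial, trial_fit = res.x, res.fun
            # Greedy DE selection against the parent.
            if trial_fit < fit[i]:
                pop[i], fit[i] = trial, trial_fit
    best = int(np.argmin(fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    w_best, err = de_cg()
    print(f"best training MSE: {err:.4f}")
```

The second scheme described in the abstract would differ mainly in where the CG refinement is applied: both parents and offspring would carry their local-search state across generations rather than only the offspring being refined before selection.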

