Journal
NEUROCOMPUTING
Volume 50, Issue -, Pages 105-123
Publisher
ELSEVIER
DOI: 10.1016/S0925-2312(01)00700-7
Keywords
supervised learning; resilient backpropagation (Rprop); gradient-based optimization
Abstract
The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark problems. The improved Rprop outperforms the other methods; only the BFGS performs better in the later stages of learning on some of the test problems. For the analysis of the local search behavior, we compare the Rprop algorithms on general hyperparabolic error landscapes, where the new variants confirm their improvement. (C) 2002 Elsevier Science B.V. All rights reserved.
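For context, below is a minimal NumPy sketch of the basic Rprop update rule (the variant without weight-backtracking) that the paper builds on. The function name rprop_minus_step is hypothetical, and the constants eta_plus=1.2, eta_minus=0.5 and the step-size bounds are the commonly used defaults rather than values taken from this paper; the improved variants evaluated in the paper modify how gradient sign changes are handled.

```python
import numpy as np

def rprop_minus_step(w, g, g_prev, delta,
                     eta_plus=1.2, eta_minus=0.5,
                     delta_min=1e-6, delta_max=50.0):
    """One step of basic Rprop without weight-backtracking.

    w, g, g_prev, delta are arrays of the same shape: the weights, the
    current and previous error gradients, and the per-weight step sizes.
    """
    agree = g * g_prev
    # Consecutive gradients with the same sign: the last step was good,
    # so grow that weight's step size (up to delta_max).
    delta = np.where(agree > 0,
                     np.minimum(delta * eta_plus, delta_max), delta)
    # A sign change means the last step jumped over a minimum:
    # shrink the step size (down to delta_min).
    delta = np.where(agree < 0,
                     np.maximum(delta * eta_minus, delta_min), delta)
    # Only the sign of the gradient is used; the magnitude of the
    # update is the adapted step size itself.
    w = w - np.sign(g) * delta
    return w, delta

# Hypothetical usage on a simple quadratic error surface E(w) = ||w||^2:
w = np.array([3.0, -2.0])
g_prev = np.zeros_like(w)
delta = np.full_like(w, 0.1)
for _ in range(100):
    g = 2.0 * w  # gradient of E(w)
    w, delta = rprop_minus_step(w, g, g_prev, delta)
    g_prev = g
```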
Authors
Christian Igel; Michael Hüsken