Article

Empirical evaluation of the improved Rprop learning algorithms

Journal

NEUROCOMPUTING
Volume 50, Pages 105-123

Publisher

ELSEVIER
DOI: 10.1016/S0925-2312(01)00700-7

Keywords

supervised learning; resilient backpropagation (Rprop); gradient-based optimization

Abstract

The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark problems. The improved Rprop outperforms the other methods; only BFGS performs better in the later stages of learning on some of the test problems. To analyze the local search behavior, we compare the Rprop algorithms on general hyperparabolic error landscapes, where the new variants confirm their improved performance. © 2002 Elsevier Science B.V. All rights reserved.
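The abstract does not reproduce the update rules, so as an illustration, below is a minimal sketch of one improved variant, iRprop- (Rprop without weight-backtracking), run on a toy hyperparabolic error landscape of the kind used in the paper's analysis. The parameter defaults (eta+ = 1.2, eta- = 0.5, Delta_0 = 0.1, Delta_min = 1e-6, Delta_max = 50) are the commonly cited Rprop settings; all function and variable names here are illustrative, not taken from the paper.

```python
# Illustrative sketch of iRprop- (Rprop without weight-backtracking),
# assuming the commonly cited default parameters. Names are hypothetical,
# not from the paper.
import numpy as np

def irprop_minus(grad_fn, w, n_steps=100,
                 eta_plus=1.2, eta_minus=0.5,
                 delta_init=0.1, delta_min=1e-6, delta_max=50.0):
    delta = np.full_like(w, delta_init)   # per-weight step sizes
    grad_prev = np.zeros_like(w)
    for _ in range(n_steps):
        grad = grad_fn(w)
        sign_change = grad * grad_prev
        # Same gradient sign: grow the step size; sign flip: shrink it.
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max),
                         np.where(sign_change < 0,
                                  np.maximum(delta * eta_minus, delta_min),
                                  delta))
        # iRprop-: after a sign flip, forget the gradient so no step is
        # taken for that weight and the next adaptation starts neutral.
        grad = np.where(sign_change < 0, 0.0, grad)
        # The update depends only on the sign of the partial derivative.
        w = w - np.sign(grad) * delta
        grad_prev = grad
    return w

# Toy hyperparabolic landscape E(w) = sum_i a_i * w_i^2, gradient 2*a*w.
a = np.array([1.0, 10.0, 100.0])
w_opt = irprop_minus(lambda w: 2 * a * w, np.array([1.0, -1.0, 1.0]))
print(w_opt)  # should end up close to the minimum at the origin
```

The key design point is that the weight update uses only the sign of each partial derivative, never its magnitude, which is what makes the Rprop family robust on badly scaled error landscapes such as the hyperparabolas above.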

Authors

Christian Igel, Michael Hüsken
