Article

Deterministic convergence of conjugate gradient method for feedforward neural networks

Journal

NEUROCOMPUTING
Volume 74, Issue 14-15, Pages 2368-2376

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2011.03.016

Keywords

Deterministic convergence; Conjugate gradient; Backpropagation; Feedforward neural networks

Funding

  1. National Natural Science Foundation of China [10871220]
  2. China Scholarship Council

Abstract

Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for backpropagation neural networks with three layers. We propose a new learning algorithm for almost cyclic learning of neural networks based on the PRP conjugate gradient method. We then establish deterministic convergence properties for three different learning modes: batch, cyclic, and almost cyclic learning. The two deterministic convergence properties are weak and strong convergence, which state that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point, respectively. It is shown that the deterministic convergence results depend on the learning mode and on the selection strategy for the learning rate. Illustrative numerical examples are given to support the theoretical analysis. (C) 2011 Elsevier B.V. All rights reserved.
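
To make the update rule concrete, below is a minimal NumPy sketch of batch-mode training with the PRP conjugate gradient method for a three-layer (single hidden layer) feedforward network. The toy data, layer sizes, the fixed learning rate eta, and the descent-direction restart are illustrative assumptions for this sketch, not the authors' exact scheme; the paper's analysis also covers cyclic and almost cyclic learning and specific learning rate selection strategies.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (an assumption; the paper's numerical examples differ).
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = np.sin(X[:, :1] + X[:, 1:])           # targets, shape (50, 1)

n_in, n_hid, n_out = 2, 6, 1              # three-layer network sizes

def unpack(w):
    """Split the flat weight vector into the two layer matrices."""
    W1 = w[:n_hid * n_in].reshape(n_hid, n_in)
    W2 = w[n_hid * n_in:].reshape(n_out, n_hid)
    return W1, W2

def error_and_grad(w):
    """Batch squared error E(w) and its gradient via backpropagation."""
    W1, W2 = unpack(w)
    H = np.tanh(X @ W1.T)                 # hidden layer activations
    out = H @ W2.T                        # linear output layer
    err = out - y
    E = 0.5 * np.sum(err ** 2)
    dW2 = err.T @ H                       # gradient w.r.t. W2
    dH = (err @ W2) * (1.0 - H ** 2)      # backprop through tanh
    dW1 = dH.T @ X                        # gradient w.r.t. W1
    return E, np.concatenate([dW1.ravel(), dW2.ravel()])

w = rng.normal(scale=0.5, size=n_hid * n_in + n_out * n_hid)
E, g = error_and_grad(w)
d = -g                                    # initial direction: steepest descent
eta = 0.01                                # fixed learning rate (an assumption;
                                          # the paper studies rate strategies)

for k in range(2000):
    w = w + eta * d                       # w_{k+1} = w_k + eta_k * d_k
    E, g_new = error_and_grad(w)
    # PRP coefficient: beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    beta = g_new @ (g_new - g) / (g @ g)
    d = -g_new + beta * d                 # d_k = -g_k + beta_k * d_{k-1}
    if g_new @ d >= 0.0:                  # restart if d is not a descent
        d = -g_new                        # direction (a common safeguard)
    g = g_new

print(f"final batch error E = {E:.4f}")

Weak convergence in the paper's sense corresponds to the gradient g tending to zero over such iterations; in cyclic or almost cyclic mode the gradient would instead be evaluated on one training sample at a time, in a fixed or almost-fixed order.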
