4.1 Article

Improved Computation for Levenberg-Marquardt Training

Journal

IEEE Transactions on Neural Networks
Volume 21, Issue 6, Pages 930-937

Publisher

IEEE - Institute of Electrical and Electronics Engineers
DOI: 10.1109/TNN.2010.2045657

Keywords

Levenberg-Marquardt (LM) algorithm; neural network training


The improved computation presented in this paper aims to optimize neural network training with the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without multiplying or storing the Jacobian matrix, which removes the memory limitation of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular part need to be calculated. Training speed therefore improves significantly, both because a smaller array is kept in memory and because fewer operations are needed to build the quasi-Hessian matrix. The memory and time savings are especially pronounced when training with large numbers of patterns.
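
The abstract describes accumulating the quasi-Hessian matrix Q (approximately J^T J) and the gradient vector g = J^T e one training pattern at a time, so the full Jacobian is never formed, and exploiting the symmetry of Q so that only its upper triangle is computed explicitly. The sketch below illustrates that accumulation scheme in NumPy; it is a minimal illustration rather than the authors' implementation, and the function names, the per-pattern row generator, the sign convention (e = output error, j_p = de_p/dw), and the plain np.linalg.solve update step are assumptions made for this example.

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors, n_params):
    """
    Accumulate Q = sum_p j_p^T j_p and g = sum_p j_p^T e_p one
    pattern/output pair at a time, so the full Jacobian never has to
    be stored.  Only the upper triangle of Q is computed explicitly;
    the lower triangle is filled in afterwards by symmetry.

    jacobian_rows : iterable yielding one Jacobian row j_p
                    (length n_params) per pattern/output pair
    errors        : iterable yielding the matching scalar error e_p
    """
    Q = np.zeros((n_params, n_params))
    g = np.zeros(n_params)

    for j_p, e_p in zip(jacobian_rows, errors):
        j_p = np.asarray(j_p, dtype=float)
        # Upper-triangular part of the rank-one update j_p^T j_p
        for i in range(n_params):
            Q[i, i:] += j_p[i] * j_p[i:]
        g += j_p * e_p

    # Mirror the (strictly) upper triangle into the lower triangle
    Q = np.triu(Q) + np.triu(Q, 1).T
    return Q, g


def lm_step(Q, g, mu):
    """One LM parameter update: dw = -(Q + mu*I)^-1 g,
    assuming e_p is the output error and j_p = de_p/dw."""
    n = Q.shape[0]
    return -np.linalg.solve(Q + mu * np.eye(n), g)
```

With N parameters, this keeps only an N x N array (plus one Jacobian row at a time) in memory instead of a P*M x N Jacobian, and roughly halves the multiplications needed per rank-one update by skipping the lower triangle.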
