Journal
NEUROCOMPUTING
Volume 74, Issue 5, Pages 765-770
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2010.10.005
Keywords
Feedforward neural network; Online gradient method; Penalty; Momentum; Boundedness; Convergence
Funding
- Foundation of China University of Petroleum [Y080820]
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is first proved. Based on this result, we show that the weights remain uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence results. (C) 2010 Elsevier B.V. All rights reserved.
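The update rule studied in the abstract combines three ingredients: a per-sample (online) gradient step, an L2 penalty term added to the error function, and a momentum term proportional to the previous weight increment. A minimal NumPy sketch of such a scheme for a two-layer network (sigmoid hidden layer, linear output) is given below; the hyper-parameter names (`eta` for the learning rate, `lam` for the penalty coefficient, `mu` for the momentum factor) and the network details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, n_hidden=4, eta=0.05, lam=1e-5, mu=0.3, epochs=100, seed=0):
    """Online gradient training of a two-layer feedforward network with an
    L2 penalty term and a momentum term.  Illustrative sketch only; the
    hyper-parameters and architecture are assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    V = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input -> hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output weights
    dV_prev = np.zeros_like(V)                        # previous increments
    dw_prev = np.zeros_like(w)                        # (for the momentum term)
    for _ in range(epochs):
        for x, t in zip(X, y):                        # one sample at a time
            h = sigmoid(V @ x)                        # hidden activations
            err = (w @ h) - t                         # linear output unit
            # gradient of the sample error plus the L2 penalty term
            gw = err * h + lam * w
            gV = np.outer(err * w * h * (1.0 - h), x) + lam * V
            # gradient step plus momentum (proportional to previous increment)
            dw = -eta * gw + mu * dw_prev
            dV = -eta * gV + mu * dV_prev
            w += dw
            V += dV
            dw_prev, dV_prev = dw, dV
    return V, w
```

The penalty term `lam * w` is what keeps the weight sequence bounded in analyses of this kind, while the momentum factor `mu` must be small enough that the combined update still decreases the penalized error function monotonically.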