Journal
NEUROCOMPUTING
Volume 89, Pages 141-146
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2012.02.029
Keywords
Feedforward neural networks; Batch back-propagation algorithm; Penalty; Boundedness; Convergence
Funding
- National Natural Science Foundation of China [61101228, 10871220, 70971014]
- Fundamental Research Funds for the Central Universities of China
- Key Laboratory Project of Education Department of Liaoning Province [841092]
This paper investigates the batch back-propagation algorithm with penalty for training feedforward neural networks. The usual penalty term is considered, proportional to the norm of the weights. The learning rate is either a small constant or an adaptive sequence. The main contribution of this paper is a theoretical proof that the weights remain bounded during network training. This boundedness is then used to establish convergence results for the algorithm, covering both weak and strong convergence. Simulation results are given to support the theoretical findings. (c) 2012 Elsevier B.V. All rights reserved.
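The algorithm described in the abstract can be illustrated with a minimal sketch: batch (full-gradient) back-propagation for a one-hidden-layer sigmoid network, where the penalized error adds a term proportional to the squared norm of the weights, so each gradient step includes a weight-decay contribution. All names here (eta, lam, the network size, and the random data) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of batch back-propagation with an L2 (weight-norm) penalty.
# Architecture, data, and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))                        # one batch of 32 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 5))             # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))             # hidden -> output weights
eta, lam = 0.1, 1e-3                                # learning rate, penalty coefficient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # forward pass over the entire batch (batch mode, not online mode)
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)
    # squared error plus a penalty proportional to the squared weight norms
    loss = 0.5 * np.sum((out - y) ** 2) \
         + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    # backward pass: gradients of the penalized error
    d_out = (out - y) * out * (1 - out)
    g2 = H.T @ d_out + lam * W2                     # penalty adds lam * W
    d_hid = (d_out @ W2.T) * H * (1 - H)
    g1 = X.T @ d_hid + lam * W1
    # one weight update per full pass through the data
    W1 -= eta * g1
    W2 -= eta * g2

print(float(np.linalg.norm(W1)), float(np.linalg.norm(W2)))
```

The penalty term is what drives the boundedness result the paper proves: each update shrinks the weights by a factor tied to `lam`, counteracting unbounded growth of the weight norms during training.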