Article

Balanced Gradient Training of Feed Forward Networks

Journal

NEURAL PROCESSING LETTERS
Volume 53, Issue 3, Pages 1823-1844

Publisher

SPRINGER
DOI: 10.1007/s11063-021-10474-1

Keywords

Back propagation; Vanishing gradient; Balanced gradient

Abstract

We show that there are infinitely many valid scaled gradients that can be used to train a neural network. A novel training method is proposed that finds the best scaled gradients in each training iteration. The method's implementation uses first-order derivatives, which makes it scalable and suitable for deep learning and big data. In simulations, the proposed method achieves similar or lower testing error than conjugate gradient and Levenberg-Marquardt, and it reaches the final network using fewer multiplies than the other two algorithms. It also performs better than conjugate gradient in convolutional neural networks.
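
The abstract does not spell out how the "best scaled gradients" are chosen, so the following is only a minimal illustrative sketch of the general idea, not the authors' algorithm: each layer's backpropagation gradient is given its own scale factor, and at every iteration the combination of scales yielding the lowest loss is selected. The toy dataset, the two-layer network, the coarse grid of candidate scales, and all function names are assumptions made for illustration.

# Hedged sketch of per-layer scaled gradient descent (illustrative only; the
# paper derives its scales analytically, whereas this uses a coarse grid search).
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) with one input and one output.
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X)

# Two-layer MLP with tanh hidden units (weights chosen small at random).
n_hidden = 16
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

def forward(W1, W2, X):
    H = np.tanh(X @ W1)          # hidden activations
    return H, H @ W2             # hidden layer, network output

def mse(W1, W2):
    _, P = forward(W1, W2, X)
    return np.mean((P - Y) ** 2)

def gradients(W1, W2):
    # First-order (backpropagation) gradients of the MSE w.r.t. W1 and W2.
    H, P = forward(W1, W2, X)
    dP = 2.0 * (P - Y) / len(X)          # derivative of MSE w.r.t. the output
    G2 = H.T @ dP                        # gradient for the output-layer weights
    dH = (dP @ W2.T) * (1.0 - H ** 2)    # backpropagate through tanh
    G1 = X.T @ dH                        # gradient for the input-layer weights
    return G1, G2

# Assumed candidate scale factors; each layer may pick a different one per iteration.
candidate_scales = [0.01, 0.1, 0.5, 1.0, 2.0]

for it in range(200):
    G1, G2 = gradients(W1, W2)
    best = (None, None, np.inf)
    # Pick the pair of per-layer scales that gives the lowest loss this iteration.
    for s1 in candidate_scales:
        for s2 in candidate_scales:
            loss = mse(W1 - s1 * G1, W2 - s2 * G2)
            if loss < best[2]:
                best = (s1, s2, loss)
    s1, s2, _ = best
    W1 -= s1 * G1
    W2 -= s2 * G2

print("final training MSE:", mse(W1, W2))

Because only first-order derivatives and loss evaluations are used, this kind of per-layer scaling stays cheap relative to second-order methods such as Levenberg-Marquardt, which is the scalability point the abstract emphasizes.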

