Article

Balanced Gradient Training of Feed Forward Networks

Journal

NEURAL PROCESSING LETTERS
Volume 53, Issue 3, Pages 1823-1844

Publisher

SPRINGER
DOI: 10.1007/s11063-021-10474-1

Keywords

Back propagation; Vanishing gradient; Balanced gradient


Abstract

We show that there are infinitely many valid scaled gradients that can be used to train a neural network. A novel training method is proposed that finds the best scaled gradients in each training iteration. The method's implementation uses only first-order derivatives, which makes it scalable and suitable for deep learning and big data. In simulations, the proposed method achieves similar or lower testing error than conjugate gradient and Levenberg-Marquardt, and it reaches the final network using fewer multiplies than either algorithm. It also outperforms conjugate gradient in convolutional neural networks.
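
The abstract's core idea, assigning each layer its own gradient scale factor chosen anew every iteration, can be illustrated with a small sketch. This is not the authors' derivation: the paper finds the best scales from first-order information, whereas the sketch below substitutes a coarse grid search per layer. The network size, toy data, and candidate scale grid are all illustrative assumptions.

```python
# Minimal sketch of per-layer scaled-gradient training (assumed setup, not the
# paper's exact algorithm): each layer's gradient step gets its own scale,
# picked each iteration by searching a small grid for the lowest training loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 16, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    P = H @ W2 + b2            # network outputs
    return H, P

def mse(P):
    return float(np.mean((P - Y) ** 2))

# Candidate per-layer scale factors (an assumption for illustration).
candidate_scales = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]

for it in range(200):
    H, P = forward(W1, b1, W2, b2)
    E = P - Y

    # First-order (backprop) gradients of the MSE loss
    gW2 = H.T @ E * (2 / len(X)); gb2 = 2 * E.mean(axis=0)
    dH  = E @ W2.T * (1 - H ** 2)
    gW1 = X.T @ dH * (2 / len(X)); gb1 = 2 * dH.mean(axis=0)

    # Pick one scale per layer by coordinate search over the grid; the paper
    # instead computes the best scales directly, which this search stands in for.
    best = (mse(P), 0.0, 0.0)
    for z1 in candidate_scales:
        for z2 in candidate_scales:
            _, Ptry = forward(W1 - z1 * gW1, b1 - z1 * gb1,
                              W2 - z2 * gW2, b2 - z2 * gb2)
            loss = mse(Ptry)
            if loss < best[0]:
                best = (loss, z1, z2)

    _, z1, z2 = best
    W1 -= z1 * gW1; b1 -= z1 * gb1
    W2 -= z2 * gW2; b2 -= z2 * gb2

    if it % 50 == 0:
        print(f"iter {it:3d}  loss {best[0]:.5f}  scales ({z1}, {z2})")
```

The sketch only ever uses first-order derivatives plus extra forward passes, which is the property the abstract highlights as making the approach scalable; a single shared learning rate would correspond to forcing z1 == z2.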
