Article

Low-variance Forward Gradients using Direct Feedback Alignment and momentum

Journal

NEURAL NETWORKS
Volume 169, Issue -, Pages 572-583

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.10.051

Keywords

Backpropagation; Low variance; Forward Gradient; Direct Feedback Alignment; Gradient estimates

This paper proposes the Forward Direct Feedback Alignment algorithm for supervised learning in deep neural networks. By combining activity-perturbed forward gradients, direct feedback alignment, and momentum, the method converges faster and performs better than other local alternatives to backpropagation.
Supervised learning in deep neural networks is commonly performed using error backpropagation. However, the sequential propagation of errors during the backward pass limits its scalability and applicability to low-powered neuromorphic hardware. Therefore, there is growing interest in finding local alternatives to backpropagation. Recently proposed methods based on forward-mode automatic differentiation suffer from high variance in large deep neural networks, which affects convergence. In this paper, we propose the Forward Direct Feedback Alignment algorithm that combines Activity-Perturbed Forward Gradients with Direct Feedback Alignment and momentum. We provide both theoretical proofs and empirical evidence that our proposed method achieves lower variance than forward gradient techniques. In this way, our approach enables faster convergence and better performance when compared to other local alternatives to backpropagation and opens a new perspective for the development of online learning algorithms compatible with neuromorphic systems.
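The abstract names three ingredients: activity-perturbed forward gradients, direct feedback alignment (DFA), and momentum. Below is a minimal NumPy sketch of each ingredient in isolation on a toy two-layer MLP. The network, its dimensions, and the simple exponential moving average standing in for momentum are illustrative assumptions, not the paper's reference implementation; how FDFA actually couples these pieces is detailed in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP with MSE loss; all sizes and names here are
# illustrative assumptions, not the authors' implementation.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B  = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback matrix (DFA)

x = rng.normal(size=n_in)
t = rng.normal(size=n_out)

a1 = W1 @ x
h  = np.tanh(a1)
y  = W2 @ h
e  = y - t                                # output error, available locally

# Exact gradient w.r.t. the hidden pre-activations (for comparison only;
# computing it would require backpropagation in a real network).
g_true = (W2.T @ e) * (1 - h**2)

def forward_gradient():
    """Activity-perturbed forward gradient: perturb the hidden activity
    with a random tangent u, push the tangent forward (a forward-mode
    JVP), and scale u by the directional derivative of the loss."""
    u  = rng.normal(size=n_hid)           # tangent on the hidden activity
    dh = (1 - h**2) * u                   # tangent through tanh
    dy = W2 @ dh                          # tangent at the output
    dL = e @ dy                           # directional derivative of the loss
    return dL * u                         # unbiased estimate of g_true

# Momentum (here a plain EMA) over successive single-sample estimates
# averages out the sampling noise and lowers the variance.
beta, g_ema = 0.9, np.zeros(n_hid)
for _ in range(500):
    g_ema = beta * g_ema + (1 - beta) * forward_gradient()

# DFA-style direct feedback signal for the same layer: the output error
# is projected back through a fixed random matrix instead of W2.T.
g_dfa = (B @ e) * (1 - h**2)

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("cosine(forward-gradient EMA, true grad):", cos(g_ema, g_true))
print("cosine(DFA signal,           true grad):", cos(g_dfa, g_true))
```

Each forward-gradient sample is an unbiased estimate of the true activity gradient, so the momentum average steadily reduces its variance, in line with the lower-variance claim of the abstract; the DFA signal, by contrast, is biased but entirely local and cheap to compute.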
