Article

Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks

Journal

FRONTIERS IN NEUROSCIENCE
Volume 15, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2021.629892

Keywords

backpropagation; deep neural networks; weight transport; update locking; edge computing; biologically-plausible learning

Funding

  1. National Foundation for Scientific Research (FNRS) of Belgium [1117116F-1117118F]


The direct random target projection (DRTP) algorithm proposed in this work views the one-hot-encoded labels of supervised classification problems as a proxy for the error sign, enabling layerwise feedforward training of the hidden layers; this solves the weight transport and update locking problems while reducing computational and memory requirements.
While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving the weight transport and update locking problems while relaxing the computational and memory requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for adaptive edge computing devices.
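To make the training rule described in the abstract concrete, the following is a minimal NumPy sketch of the DRTP idea: each hidden layer receives a fixed random projection of the one-hot target as its learning signal and is updated immediately during the forward pass, removing both the weight transport and the update locking constraints. The layer sizes, the tanh/softmax choices, the learning rate, and the function name drtp_step are illustrative assumptions, not the authors' implementation.

```python
# Minimal DRTP sketch (assumed setup, not the paper's experimental configuration).
import numpy as np

rng = np.random.default_rng(0)

d_in, d_h1, d_h2, d_out = 784, 256, 128, 10   # hypothetical layer sizes
lr = 0.01                                      # hypothetical learning rate

# Trainable forward weights.
W1 = rng.normal(0.0, 0.05, (d_h1, d_in))
W2 = rng.normal(0.0, 0.05, (d_h2, d_h1))
W3 = rng.normal(0.0, 0.05, (d_out, d_h2))

# Fixed random projection matrices, one per hidden layer. They map the
# one-hot target into each hidden layer's space and are never trained,
# so no transposed forward weights (weight transport) are needed.
B1 = rng.normal(0.0, 0.05, (d_h1, d_out))
B2 = rng.normal(0.0, 0.05, (d_h2, d_out))


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def drtp_step(x, y_onehot):
    """One DRTP training step on a single example (x, y_onehot)."""
    global W1, W2, W3

    # Hidden layer 1: forward, then an immediate local update driven by the
    # projected target. The target acts as a sign-flipped proxy for the
    # error, hence the additive update; any global sign could equivalently
    # be absorbed into the fixed random matrix B1.
    a1 = np.tanh(W1 @ x)
    delta1 = (B1 @ y_onehot) * (1.0 - a1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    W1 += lr * np.outer(delta1, x)

    # Hidden layer 2: same pattern with its own fixed projection B2.
    a2 = np.tanh(W2 @ a1)
    delta2 = (B2 @ y_onehot) * (1.0 - a2 ** 2)
    W2 += lr * np.outer(delta2, a1)

    # Output layer: trained with its purely local cross-entropy gradient,
    # which needs no feedback from any other layer.
    y_hat = softmax(W3 @ a2)
    W3 -= lr * np.outer(y_hat - y_onehot, a2)
    return y_hat


# Toy usage on random data, for illustration only.
for _ in range(100):
    x = rng.normal(size=d_in)
    y = np.zeros(d_out)
    y[rng.integers(d_out)] = 1.0
    drtp_step(x, y)
```

Note how each weight matrix is updated as soon as its layer's activation is available: no layer waits for a backward pass, which is what removes update locking and the associated buffering overhead on edge devices.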
