Article

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

Journal

FRONTIERS IN NEUROSCIENCE
Volume 10, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2016.00241

Keywords

stochastic processes; spiking neural networks; synaptic plasticity; unsupervised learning; Hopfield networks; regularization; synaptic transmission

Funding

  1. National Science Foundation (NSF) [CCF-1317373]
  2. Office of Naval Research (ONR MURI) [N00014-13-1-0205]
  3. Intel Corporation
  4. NSF Directorate for Computer & Information Science & Engineering
  5. NSF Division of Computing and Communication Foundations [1317407]

Abstract

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
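To make the mechanism described above concrete, the sketch below shows a discrete-time, RBM-style analogue of the stochastic synapse idea: every connection is independently "blanked out" by a Bernoulli mask at each update (the DropConnect-like random mask over connections), and a CD-1-style weight update and magnitude-based pruning step follow. This is not the authors' implementation; the layer sizes, the release probability p_release, the 1/p_release rescaling, and the learning rate are illustrative assumptions, and the actual S2M uses an event-driven plasticity rule with spiking neurons.

```python
# Minimal sketch (assumed parameters, not the authors' code) of sampling with
# unreliable synapses in an RBM-like network, as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 784, 100   # hypothetical layer sizes
p_release = 0.5                  # hypothetical synaptic release probability
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    # Each synapse transmits only if its Bernoulli "release" succeeds;
    # this stochastic mask is the source of sampling noise.
    mask = rng.random(W.shape) < p_release
    h_prob = sigmoid(v @ (W * mask) / p_release + b_h)  # rescaling is an assumption
    return (rng.random(n_hidden) < h_prob).astype(float), h_prob

def sample_visible(h):
    mask = rng.random(W.shape) < p_release
    v_prob = sigmoid((W * mask) @ h / p_release + b_v)
    return (rng.random(n_visible) < v_prob).astype(float), v_prob

def cd1_step(v0, lr=1e-3):
    # Discrete-time CD-1 analogue of the event-driven contrastive divergence
    # rule mentioned in the abstract (illustration only).
    global W
    h0, h0_prob = sample_hidden(v0)
    v1, _ = sample_visible(h0)
    h1, h1_prob = sample_hidden(v1)
    W += lr * (np.outer(v0, h0_prob) - np.outer(v1, h1_prob))

# Pruning sketch: zero out the weakest 75% of connections by magnitude,
# mirroring the robustness experiment described in the abstract.
def prune_weakest(fraction=0.75):
    global W
    threshold = np.quantile(np.abs(W), fraction)
    W[np.abs(W) < threshold] = 0.0
```

In this sketch the same Bernoulli blank-out acts both as the sampling mechanism and as a regularizer, which is the dual role the abstract attributes to synaptic stochasticity.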

