Article

Event-driven contrastive divergence for spiking neuromorphic systems

Journal

FRONTIERS IN NEUROSCIENCE
Volume 7, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2013.00272

Keywords

synaptic plasticity; neuromorphic cognition; Markov chain Monte Carlo; recurrent neural network; generative model

Funding

  1. National Science Foundation [NSF EFRI-1137279, CCF-1317560]
  2. Office of Naval Research [ONR MURI 14-13-1-0205]
  3. Swiss National Science Foundation (SNF) [PA00P2_142058]
  4. National Science Foundation, Directorate for Computer & Information Science & Engineering [1317407]

Abstract

Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation, and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
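For reference, the discrete-step CD-1 update that the paper's event-driven variant replaces can be sketched as follows. This is a generic Bernoulli-RBM implementation in NumPy, not the authors' spiking formulation; all function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_v, b_h, v0, lr=0.01):
    """One CD-1 step for a Bernoulli RBM (illustrative sketch):
    sample hidden units given the data, reconstruct the visibles
    with one Gibbs step, and move the weights toward the data
    statistics and away from the model statistics."""
    # positive phase: hidden probabilities and samples given the data
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one Gibbs reconstruction of the visibles
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # contrastive divergence gradient estimates
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)
    return W, b_v, b_h
```

In the event-driven scheme described in the abstract, the two phases above are not computed as separate discrete passes: the recurrent spiking dynamics play the role of the Gibbs sampling, and an STDP rule accumulates the weight updates continuously.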
