Article

SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks

Journal

NEURAL COMPUTATION
Volume 30, Issue 6, Pages 1514-1541

Publisher

MIT PRESS
DOI: 10.1162/neco_a_01086

Funding

  1. SNSF (Swiss National Science Foundation)
  2. Wellcome Trust [110124/Z/15/Z]
  3. Burroughs Wellcome Foundation
  4. Sloan Foundation
  5. McKnight Foundation
  6. Simons Foundation
  7. James S. McDonnell Foundation
  8. Office of Naval Research

Abstract

A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
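To make the three-factor structure described in the abstract concrete, below is a minimal sketch of a surrogate-gradient update for a single leaky integrate-and-fire output neuron in plain NumPy. It is an illustration under assumed toy constants (time step, time constants, threshold, learning rate, random input and target spike trains), not the authors' released code: the presynaptic spikes are low-pass filtered into synaptic traces, a surrogate derivative of the spike nonlinearity is evaluated at the membrane potential, their product is passed through a double-exponential filter to form an eligibility trace, and a filtered output error (target minus actual spikes) acts as the third factor that gates the weight change.

```python
# Minimal, illustrative sketch of a SuperSpike-style three-factor update for one
# leaky integrate-and-fire (LIF) output neuron. All constants, variable names, and
# the random input/target spike trains are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed toy constants ---
dt        = 1e-3      # time step (s)
T         = 500       # number of time steps
n_pre     = 100       # number of presynaptic units
tau_mem   = 10e-3     # membrane time constant
tau_syn   = 5e-3      # synaptic time constant
tau_rise  = 5e-3      # rise time of eligibility/error filters
tau_decay = 10e-3     # decay time of eligibility/error filters
beta      = 1.0       # steepness of the surrogate nonlinearity
theta     = 1.0       # firing threshold
lr        = 1e-3      # learning rate

def surrogate_deriv(u):
    """Derivative of the fast sigmoid sigma(x) = x / (1 + |x|), evaluated at x = beta*(u - theta)."""
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

# toy data: sparse random input spikes and a random target spike train
pre_spikes    = (rng.random((T, n_pre)) < 0.02).astype(float)
target_spikes = (rng.random(T) < 0.01).astype(float)

w = 0.05 * rng.standard_normal(n_pre)   # feedforward weights

# state variables
i_syn     = np.zeros(n_pre)   # filtered presynaptic traces (PSP shape)
u         = 0.0               # membrane potential of the output neuron
elig      = np.zeros(n_pre)   # eligibility traces
elig_rise = np.zeros(n_pre)   # rise state of the double-exponential filter
err       = 0.0               # filtered output error signal
err_rise  = 0.0
dw        = np.zeros(n_pre)   # accumulated weight update

for t in range(T):
    # 1) presynaptic traces: exponential filter of the input spikes
    i_syn += -dt / tau_syn * i_syn + pre_spikes[t]

    # 2) LIF membrane dynamics with reset after an output spike
    u += dt / tau_mem * (-u + w @ i_syn)
    out_spike = float(u >= theta)
    if out_spike:
        u = 0.0

    # 3) Hebbian term: surrogate derivative at the membrane potential
    #    times the filtered presynaptic activity
    hebb = surrogate_deriv(u) * i_syn

    # 4) eligibility trace: double-exponential filter of the Hebbian term
    elig_rise += dt / tau_rise * (-elig_rise + hebb)
    elig      += dt / tau_decay * (-elig + elig_rise)

    # 5) error signal: same filter applied to (target - actual) output spikes
    err_rise += dt / tau_rise * (-err_rise + (target_spikes[t] - out_spike))
    err      += dt / tau_decay * (-err + err_rise)

    # 6) three-factor update: the error signal gates the eligibility trace
    dw += lr * err * elig

w += dw   # apply the accumulated update at the end of the trial
print("weight change norm:", np.linalg.norm(dw))
```

For hidden units, the same eligibility trace would be combined with an error signal routed back through feedback connections, which is where the symmetric, random, and uniform feedback variants compared in the abstract come in.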

Authors

Friedemann Zenke, Surya Ganguli
