Article

Surrogate gradients for analog neuromorphic computing

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2109194119

Keywords

neuromorphic hardware; recurrent neural networks; spiking neural networks; surrogate gradients; self-calibration

Funding

  1. European Union's Horizon 2020 research and innovation programme [720270, 785907, 945539]
  2. Novartis Research Foundation


This study demonstrates the applicability of surrogate gradient learning on analog neuromorphic hardware using an in-the-loop approach. The results show that learning can self-correct for device mismatch and achieve competitive spiking network performance on vision and speech benchmarks. This work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability to analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
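The core idea behind surrogate gradients, as described in the abstract, is to keep the hard spiking nonlinearity in the forward pass while replacing its derivative (zero almost everywhere) with a smooth stand-in for backpropagation. The sketch below illustrates this in plain numpy using a SuperSpike-style fast-sigmoid surrogate; the function names, the threshold of 1.0, and the steepness `beta` are illustrative assumptions, not the paper's actual implementation, which runs the forward pass on the BrainScaleS-2 chip itself.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: a neuron emits a binary spike when its
    membrane potential v crosses the firing threshold."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: the Heaviside step's true derivative is zero
    almost everywhere, so a smooth fast-sigmoid derivative
    (SuperSpike-style) is used instead; beta sets its sharpness."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

# Gradients are largest near threshold, letting credit flow through
# neurons that were close to spiking, even if they stayed silent.
v = np.array([0.2, 0.9, 1.0, 1.8])
spikes = spike(v)            # [0., 0., 1., 1.]
grads = surrogate_grad(v)    # peaks at v == threshold
```

In an in-the-loop setting, the forward pass (here `spike`) would be replaced by measurements from the analog chip, while `surrogate_grad` is evaluated on the host computer to compute weight updates.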
