Journal: SCIENTIFIC REPORTS
Volume: 11, Issue: 1, Pages: -
Publisher: NATURE PORTFOLIO
DOI: 10.1038/s41598-021-98448-0
Keywords: -
Funding:
- C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program - DARPA
- National Science Foundation [1947826]
Abstract: By emulating biological features of the brain, Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional deep learning. To make SNNs ubiquitous, a 'visual explanation' technique for analysing and explaining the internal spike behavior of such temporal deep SNNs is crucial. Explaining SNNs visually makes the network more transparent, giving the end-user a tool to understand how SNNs make temporal predictions and why they make a certain decision. In this paper, we propose a bio-plausible visual explanation tool for SNNs, called Spike Activation Map (SAM). SAM yields a heatmap (i.e., localization map) corresponding to each time-step of the input data by highlighting neurons with short inter-spike interval activity. Interestingly, without the use of gradients and ground truth, SAM produces a temporal localization map highlighting the region of interest in an image attributed to an SNN's prediction at each time-step. Overall, SAM marks the beginning of a new research area, 'explainable neuromorphic computing', that will ultimately allow end-users to establish appropriate trust in predictions from SNNs.
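As a rough illustration of the idea in the abstract, the sketch below computes a SAM-style temporal heatmap from recorded spike trains of a convolutional SNN layer: neurons whose recent spikes are densely packed (short inter-spike intervals) receive high scores, with no gradients or labels involved. The exponential kernel, the decay rate `gamma`, and the channel-wise summation here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spike_activation_map(spikes, gamma=0.5):
    """Compute a temporal heatmap from binary spike trains.

    spikes : array of shape (T, C, H, W) with 0/1 entries, the spike
             trains of one convolutional SNN layer over T time-steps.
    gamma  : assumed decay rate of the inter-spike kernel (illustrative).

    Returns an array of shape (T, H, W): one localization map per
    time-step. A neuron's contribution at time t is the sum of
    exp(-gamma * (t - t_s)) over its spike times t_s <= t, so neurons
    that spiked recently and frequently (short inter-spike intervals)
    dominate; contributions are then summed over channels.
    """
    T, C, H, W = spikes.shape
    sam = np.zeros((T, H, W))
    for t in range(T):
        # Neuronal contribution scores: exponentially decayed
        # influence of every spike up to and including time t.
        ncs = np.zeros((C, H, W))
        for ts in range(t + 1):
            ncs += np.exp(-gamma * (t - ts)) * spikes[ts]
        sam[t] = ncs.sum(axis=0)  # aggregate across channels
    return sam
```

In this sketch, a neuron that spikes at every time-step accumulates a large score by the final step, while a neuron with a single early spike decays toward zero, which is the qualitative behavior the abstract describes.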