Article

Gradient-based feature-attribution explainability methods for spiking neural networks

Journal

FRONTIERS IN NEUROSCIENCE
Volume 17

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2023.1153999

Keywords

spiking neural networks; explainable artificial intelligence; XAI; brain-inspired computation; neuromorphic; event-based encoding; MNIST; CIFAR

This paper adapts gradient-based explainability methods, originally developed for conventional ANNs, to SNNs that process either event-based spiking data or real-valued data. The adapted methods are evaluated on classification tasks, and their accuracy is confirmed through perturbation experiments.

Introduction
Spiking neural networks (SNNs) are a model of computation that mimics the behavior of biological neurons. SNNs process event data (spikes) and operate more sparsely than artificial neural networks (ANNs), enabling ultra-low latency and low power consumption. This paper aims to adapt and evaluate gradient-based explainability methods for SNNs that were originally developed for conventional ANNs.

Methods
The adapted methods produce input feature-attribution maps for SNNs trained through backpropagation that process either event-based spiking data or real-valued data. They address the limitations of existing explainability work for SNNs, such as poor scalability, applicability restricted to convolutional layers, the need to train an auxiliary model, and maps of activation values rather than true attribution scores. The adapted methods are evaluated on classification tasks for both real-valued and spiking data, and their accuracy is confirmed through perturbation experiments at the pixel and spike levels.

Results and discussion
The results show that gradient-based SNN attribution methods identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods. We also observe that the chosen coding technique has a noticeable effect on which input features are most significant. These findings demonstrate the potential of gradient-based explainability methods to improve our understanding of how SNNs process information and to contribute to the development of more efficient and accurate SNNs.
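As a rough illustration of the method family described above, the minimal sketch below (plain PyTorch; the LIF dynamics, fast-sigmoid surrogate slope, network size, and random Bernoulli spike encoding are all illustrative assumptions, not the paper's exact formulation) backpropagates the predicted class score through surrogate spike gradients to obtain a gradient × input attribution per input spike, then runs a spike-level perturbation check of the kind the abstract mentions.

import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate; the slope of 10 is an arbitrary choice.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

spike_fn = SurrogateSpike.apply

class TinySNN(nn.Module):
    """One hidden LIF layer; the output is the readout accumulated over time."""
    def __init__(self, n_in=100, n_hid=64, n_out=10, beta=0.9):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hid)
        self.fc2 = nn.Linear(n_hid, n_out)
        self.beta = beta  # membrane leak factor

    def forward(self, x):                    # x: (T, batch, n_in) spike train
        T, B, _ = x.shape
        v1 = torch.zeros(B, self.fc1.out_features)
        out = torch.zeros(B, self.fc2.out_features)
        for t in range(T):
            v1 = self.beta * v1 + self.fc1(x[t])
            s1 = spike_fn(v1 - 1.0)          # fire when v crosses threshold 1.0
            v1 = v1 - s1                     # soft reset
            out = out + self.fc2(s1)
        return out                           # class scores

# Gradient x input attribution: backpropagate the predicted class
# score to the input; absent spikes receive zero attribution.
net = TinySNN()
x = (torch.rand(20, 1, 100) < 0.1).float().requires_grad_(True)
score = net(x)
pred = score.argmax()
score[0, pred].backward()
attribution = (x.grad * x.detach()).squeeze(1)   # (T, n_in) per-spike scores

# Spike-level perturbation check: deleting the k most-attributed
# spikes should lower the class score more than deleting k random spikes.
with torch.no_grad():
    k = 20
    base = score[0, pred].item()
    flat = attribution.abs().flatten()
    x_top = x.detach().clone().flatten()
    x_top[flat.topk(k).indices] = 0.0
    x_rnd = x.detach().clone().flatten()
    spikes = x_rnd.nonzero().squeeze(1)
    x_rnd[spikes[torch.randperm(len(spikes))[:k]]] = 0.0
    drop_top = base - net(x_top.view_as(x))[0, pred].item()
    drop_rnd = base - net(x_rnd.view_as(x))[0, pred].item()
    print(f"score drop: top-k {drop_top:.3f} vs random {drop_rnd:.3f}")

On a trained network, the top-k deletion should degrade the class score markedly more than the random deletion; the surrogate gradient is what makes the backward pass through the non-differentiable spike function possible in the first place.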
