3.8 Proceedings Paper

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR52688.2022.00358

Keywords

-

Funding

  1. National Natural Science Foundation of China [62027804, 61825101, 62088102]

Abstract

This paper tackles image reconstruction for neuromorphic vision sensors, which report per-pixel brightness changes with high temporal resolution and high dynamic range. Reconstruction is performed by a deep spiking neural network (SNN) built around an adaptive membrane potential (AMP) neuron. The proposed models achieve performance comparable to artificial neural network (ANN) models while being far more energy-efficient.
The neuromorphic vision sensor is a new bio-inspired imaging paradigm that reports asynchronous, continuous per-pixel brightness changes, called 'events', with high temporal resolution and high dynamic range. So far, event-based image reconstruction methods have been based on artificial neural networks (ANNs) or hand-crafted spatiotemporal smoothing techniques. In this paper, we are the first to implement image reconstruction with a deep spiking neural network (SNN) architecture. As bio-inspired neural networks operating with asynchronous binary spikes distributed over time, SNNs can potentially lead to greater computational efficiency on event-driven hardware. We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky Integrate-and-Fire (LIF) neurons and Membrane Potential (MP) neurons. We find that spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks. Furthermore, to better utilize the temporal information, we propose a hybrid potential-assisted framework (PAEVSNN) that uses the membrane potential of the spiking neuron. The proposed neuron, referred to as the Adaptive Membrane Potential (AMP) neuron, adaptively updates the membrane potential according to the input spikes. Experimental results demonstrate that our models achieve performance comparable to ANN-based models on the IJRR, MVSEC, and HQF datasets, while EVSNN and PAEVSNN are 19.36x and 7.75x more energy-efficient than their ANN counterparts, respectively. The code and pretrained model are available at https://sites.google.com/view/evsnn.
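To make the neuron models concrete, the following is a minimal sketch of a discrete-time Leaky Integrate-and-Fire update together with a simplified adaptive-membrane-potential variant. The decay factor beta, the threshold v_th, the soft-reset rule, and the AMPNeuron gating scheme are illustrative assumptions for exposition, not the exact formulation used in EVSNN/PAEVSNN.

    import numpy as np

    # Minimal discrete-time Leaky Integrate-and-Fire (LIF) neuron.
    # beta (membrane decay) and v_th (firing threshold) are assumed
    # values, not the paper's hyperparameters.
    class LIFNeuron:
        def __init__(self, beta=0.9, v_th=1.0):
            self.beta = beta      # membrane potential decay factor
            self.v_th = v_th      # firing threshold
            self.v = 0.0          # membrane potential state

        def step(self, i_in):
            # Leaky integration of the input current.
            self.v = self.beta * self.v + i_in
            # Emit a binary spike when the potential crosses the threshold.
            spike = 1.0 if self.v >= self.v_th else 0.0
            # Soft reset: subtract the threshold so residual charge is kept.
            self.v -= spike * self.v_th
            return spike

    # Hypothetical adaptive-membrane-potential (AMP-style) neuron: it does
    # not fire; it keeps an analog membrane potential that is updated more
    # strongly when an input spike arrives. The gating rule below is an
    # assumption made for illustration only.
    class AMPNeuron:
        def __init__(self, beta=0.9):
            self.beta = beta
            self.v = 0.0

        def step(self, i_in, spike_in):
            # Adaptive update: blend decayed potential and new input,
            # weighting the input more heavily when a spike is present.
            gate = 0.9 if spike_in else 0.1   # assumed gating values
            self.v = (1.0 - gate) * self.beta * self.v + gate * i_in
            return self.v   # analog potential serves as the output

    if __name__ == "__main__":
        lif, amp = LIFNeuron(), AMPNeuron()
        rng = np.random.default_rng(0)
        for t in range(5):
            i_in = rng.uniform(0.0, 1.0)
            s = lif.step(i_in)
            v = amp.step(i_in, s)
            print(f"t={t} input={i_in:.2f} spike={s:.0f} amp_v={v:.3f}")

In an architecture of this style, hidden layers communicate only through binary spikes, while an analog membrane potential (as exposed by the MP/AMP neurons above) can serve as the reconstructed intensity at the output; how the gating is parameterized in PAEVSNN is specified in the paper itself.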

