Article

Training Spiking Neural Networks for Reinforcement Learning Tasks With Temporal Coding Method

Journal

FRONTIERS IN NEUROSCIENCE
Volume 16, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2022.877701

Keywords

spiking neural networks; reinforcement learning; temporal coding; fully differentiable; asynchronous processing

Funding

  1. National Natural Science Foundation of China [62002369]


In recent years, there has been a growing demand for using spiking neural networks (SNNs) to implement artificially intelligent systems. A new temporal coding method has been proposed to train SNNs while preserving their asynchronous nature. Combined with a self-incremental variable and an encoding method, this approach enables SNNs to achieve performance comparable to state-of-the-art artificial neural networks in reinforcement learning tasks.
Recent years have witnessed an increasing demand for using spiking neural networks (SNNs) to implement artificially intelligent systems, and in particular for combining SNNs with reinforcement learning (RL) architectures through an effective training method. Recently, a temporal coding method has been proposed to train SNNs while preserving the asynchronous nature of spiking neurons. We propose a training method that enables the temporal coding method in RL tasks. To tackle the problem of high spike sparsity, we introduce a self-incremental variable that pushes each spiking neuron to fire, which makes SNNs fully differentiable. In addition, an encoding method is proposed to solve the problem of information loss in temporal-coded inputs. The experimental results show that SNNs trained by the proposed method achieve performance comparable to state-of-the-art artificial neural networks on benchmark reinforcement learning tasks.
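To make the role of a self-incremental variable concrete, the following is a minimal sketch, not the authors' implementation: a non-leaky integrate-and-fire neuron with time-to-first-spike coding, in which a small constant drive alpha acts as a self-incremental term so that the neuron is pushed toward its threshold and its spike time is a closed-form, differentiable function of the weights and input spike times. The neuron model, the parameter names (theta, alpha), and the function spike_time are illustrative assumptions.

import numpy as np

def spike_time(input_times, weights, theta=1.0, alpha=0.1):
    """Illustrative time-to-first-spike of a non-leaky integrate-and-fire neuron.

    Each input spike at time t_i injects a constant current w_i from t_i onward,
    so V(t) = alpha*t + sum_{t_i < t} w_i * (t - t_i). The self-incremental
    drive alpha*t pushes the neuron toward threshold even when synaptic input
    alone would not, so the output time stays defined (as long as alpha
    outweighs net inhibition) and is differentiable in weights and input times.
    """
    order = np.argsort(input_times)
    t_sorted = np.asarray(input_times, float)[order]
    w_sorted = np.asarray(weights, float)[order]
    causal_w, causal_wt = alpha, 0.0          # slope and weighted-time sum of the causal set
    for k in range(len(t_sorted) + 1):
        # Candidate threshold crossing if only the first k inputs arrive before the output.
        t_out = (theta + causal_wt) / causal_w if causal_w > 1e-12 else np.inf
        next_input = t_sorted[k] if k < len(t_sorted) else np.inf
        if t_out <= next_input:               # consistent: no later input precedes the output
            return t_out
        causal_w += w_sorted[k]               # otherwise the k-th input is causal; include it
        causal_wt += w_sorted[k] * t_sorted[k]
    return np.inf

# Example: two inputs; the output time is an explicit function of the weights.
print(spike_time(np.array([0.2, 0.5]), np.array([1.5, 0.8])))

Because the spike time is an explicit function of the weights, gradients can be backpropagated through layers of such neurons without rate approximations, which is the differentiability property the abstract refers to; the paper's actual encoding scheme and RL training loop are not reproduced here.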
