Journal
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
Volume 163, Issue -, Pages 1-19
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jpdc.2022.01.021
Keywords
Computational neuroscience; Neural models; Neural networks; Leaky Integrate-and-Fire model; GPU processing
Funding
- NVIDIA Corporation
- Greek Research and Technology Network [PA006, PR00711]
Abstract
Understanding how neurons perform when they are organized in interacting networks is key to understanding how the brain performs complex functions. Different models that approximate the behavior of interconnected neurons have been proposed in the literature. Implementing these models to simulate neuron behavior at a level detailed enough to observe collective phenomena is computationally intensive. In this study, we analyze the coupled Leaky Integrate-and-Fire model and report on the issues that affect performance when the model is implemented on a GPU. We conclude that the problem is heavily memory-bound. Advances in memory technology at the hardware level appear to be the deciding factor in achieving better performance on the GPU. Our results show that, using an NVidia K40 GPU, a modest 2x speedup can be achieved compared to a parallel implementation running on a modern multi-core CPU. However, a substantial speedup of 11.1x can be achieved using an NVidia V100 GPU, mainly due to the improvements in its memory subsystem. (C) 2022 Elsevier Inc. All rights reserved.
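For context on why such a simulation is memory-bound: a coupled Leaky Integrate-and-Fire network advances each neuron's membrane potential with a simple update rule plus a coupling term from spiking neighbors. The sketch below is a minimal, illustrative forward-Euler step in NumPy, not the authors' implementation; all parameter names and values (`tau`, `v_rest`, `v_thresh`, the weight matrix `W`, etc.) are assumptions chosen for demonstration.

```python
import numpy as np

def lif_step(v, spiked, W, dt=0.1, tau=20.0, v_rest=-65.0,
             v_reset=-65.0, v_thresh=-50.0, i_ext=1.5):
    """One forward-Euler step of a coupled LIF network (illustrative only).

    v      : membrane potentials (mV), shape (n,)
    spiked : boolean spike flags from the previous step, shape (n,)
    W      : synaptic weight matrix, shape (n, n)
    All parameter values here are hypothetical, not taken from the paper.
    """
    i_syn = W @ spiked.astype(v.dtype)           # coupling from neighbors
    v = v + (-(v - v_rest) + i_ext + i_syn) * (dt / tau)
    new_spiked = v >= v_thresh                   # threshold crossing
    v = np.where(new_spiked, v_reset, v)         # reset fired neurons
    return v, new_spiked

# Example run: note that each step is a few streaming passes over the
# state arrays plus a matrix-vector product -- little arithmetic per
# byte moved, which is the hallmark of a memory-bound workload.
rng = np.random.default_rng(0)
n = 1000
v = np.full(n, -65.0)
spiked = np.zeros(n, dtype=bool)
W = rng.normal(0.0, 0.1, size=(n, n))
for _ in range(100):
    v, spiked = lif_step(v, spiked, W)
```

Because each time step touches the full network state while performing only a handful of floating-point operations per element, performance on a GPU is governed largely by memory bandwidth, consistent with the paper's finding that the V100's memory subsystem drives most of its speedup over the K40.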