Article

Granular layEr Simulator: Design and Multi-GPU Simulation of the Cerebellar Granular Layer

Journal

Frontiers in Computational Neuroscience

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fncom.2021.630795

Keywords

computational modeling; neuroscience; granular layer simulator; graphics processing unit; high performance computing; parallel processing

Funding

  1. European Union [785907, 945539]

Abstract

In modern computational modeling, neuroscientists need to reproduce the long-lasting activity of large-scale networks in which neurons are described by highly complex mathematical models. These requirements strongly increase the computational load of the simulations, which can be handled efficiently by exploiting parallel systems to reduce processing times. Graphics Processing Unit (GPU) devices meet this need by providing High Performance Computing on the desktop. In this work, the authors describe the development of a novel Granular layEr Simulator, implemented on a multi-GPU system, capable of reconstructing the cerebellar granular layer in 3D space and reproducing its neuronal activity. The reconstruction is characterized by a high level of novelty and realism, considering axonal/dendritic field geometries oriented in 3D space and following convergence/divergence rates reported in the literature. Neurons are modeled using Hodgkin and Huxley representations. The network is validated by reproducing typical behaviors that are well documented in the literature, such as the center-surround organization. The reconstruction of a network whose volume is 600 × 150 × 1,200 μm³, with 432,000 granule cells, 972 Golgi cells, 32,399 glomeruli, and 4,051 mossy fibers, takes 235 s on an Intel i9 processor. Reproducing 10 s of activity takes only 4.34 and 3.37 h on a single- and a multi-GPU desktop system (with one or two NVIDIA RTX 2080 GPUs, respectively). Moreover, the code takes only 3.52 and 2.44 h when run on one or two NVIDIA V100 GPUs, respectively. The relevant speedups reached (up to ~38× in the single-GPU version and ~55× in the multi-GPU version) clearly demonstrate that GPU technology is highly suitable for realistic large-network simulations.
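Since the abstract reports that each neuron is described by Hodgkin-Huxley equations and that the whole population is advanced in parallel on the GPU, a minimal CUDA sketch of that idea is given below. It is illustrative only: the kernel name hh_step, the one-thread-per-neuron layout, the forward-Euler step, and the classic squid-axon parameters are assumptions, not code from the simulator described in the paper.

```cuda
// Hypothetical sketch: one-thread-per-neuron Hodgkin-Huxley update on the GPU.
// Parameters are the classic squid-axon values; the Euler step, kernel name and
// data layout are illustrative assumptions, not the paper's actual implementation.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

struct HHState {            // a structure-of-arrays layout would be more bandwidth-friendly;
    float v, m, h, n;       // kept as one struct per neuron here for brevity
};

__device__ float alpha_m(float v) { return 0.1f * (v + 40.0f) / (1.0f - expf(-(v + 40.0f) / 10.0f)); }
__device__ float beta_m (float v) { return 4.0f * expf(-(v + 65.0f) / 18.0f); }
__device__ float alpha_h(float v) { return 0.07f * expf(-(v + 65.0f) / 20.0f); }
__device__ float beta_h (float v) { return 1.0f / (1.0f + expf(-(v + 35.0f) / 10.0f)); }
__device__ float alpha_n(float v) { return 0.01f * (v + 55.0f) / (1.0f - expf(-(v + 55.0f) / 10.0f)); }
__device__ float beta_n (float v) { return 0.125f * expf(-(v + 65.0f) / 80.0f); }

// Advance every neuron by one time step dt (ms); i_syn holds the input current per
// neuron (uA/cm^2), e.g. accumulated from afferent spikes in a preceding kernel.
__global__ void hh_step(HHState* s, const float* i_syn, int n_neurons, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;

    HHState x = s[i];
    const float gNa = 120.0f, gK = 36.0f, gL = 0.3f;     // mS/cm^2
    const float eNa = 50.0f,  eK = -77.0f, eL = -54.4f;  // mV
    const float cm  = 1.0f;                              // uF/cm^2

    // Gating variables (forward Euler; exponential Euler is a common alternative).
    x.m += dt * (alpha_m(x.v) * (1.0f - x.m) - beta_m(x.v) * x.m);
    x.h += dt * (alpha_h(x.v) * (1.0f - x.h) - beta_h(x.v) * x.h);
    x.n += dt * (alpha_n(x.v) * (1.0f - x.n) - beta_n(x.v) * x.n);

    // Membrane equation: C dV/dt = I_syn - I_ion.
    float i_ion = gNa * x.m * x.m * x.m * x.h * (x.v - eNa)
                + gK  * x.n * x.n * x.n * x.n * (x.v - eK)
                + gL  * (x.v - eL);
    x.v += dt * (i_syn[i] - i_ion) / cm;

    s[i] = x;
}

int main()
{
    const int n = 432000;                 // e.g. the granule-cell count quoted above
    const float dt = 0.025f;              // ms
    std::vector<HHState> host(n, {-65.0f, 0.05f, 0.6f, 0.32f});

    HHState* d_state;  float* d_isyn;
    cudaMalloc(&d_state, n * sizeof(HHState));
    cudaMalloc(&d_isyn,  n * sizeof(float));
    cudaMemcpy(d_state, host.data(), n * sizeof(HHState), cudaMemcpyHostToDevice);
    cudaMemset(d_isyn, 0, n * sizeof(float));   // zero input current in this toy run

    int threads = 256, blocks = (n + threads - 1) / threads;
    for (int step = 0; step < 4000; ++step)     // 100 ms of simulated activity
        hh_step<<<blocks, threads>>>(d_state, d_isyn, n, dt);
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), d_state, n * sizeof(HHState), cudaMemcpyDeviceToHost);
    printf("V[0] after 100 ms: %.2f mV\n", host[0].v);
    cudaFree(d_state); cudaFree(d_isyn);
    return 0;
}
```

In a multi-GPU configuration such as the one benchmarked above, the neuron population would additionally be partitioned across devices, with synaptic input exchanged between integration steps; how the authors implement that exchange is not detailed in the abstract.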
