Article

Automated translation and accelerated solving of differential equations on multiple GPU platforms

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2023.116591

Keywords

Differential equations; Numerical simulation; GPU; Data-parallelism; Computer kernel; HPC


Abstract
We demonstrate a high-performance vendor-agnostic method for massively parallel solving of ensembles of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) on GPUs. The method is integrated with a widely used differential equation solver library in a high-level language (Julia's DifferentialEquations.jl) and enables GPU acceleration without requiring code changes by the user. Our approach achieves state-of-the-art performance compared to hand-optimized CUDA-C++ kernels while performing 20-100x faster than the vectorizing map (vmap) approach implemented in JAX and PyTorch. Performance evaluation on NVIDIA, AMD, Intel, and Apple GPUs demonstrates performance portability and vendor agnosticism. We show composability with MPI to enable distributed multi-GPU workflows. The implemented solvers are fully featured - supporting event handling, automatic differentiation, and incorporation of datasets via the GPU's texture memory - allowing scientists to take advantage of GPU acceleration on all major current architectures without changing their model code and without loss of performance. We distribute the software as an open-source library, DiffEqGPU.jl.
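The abstract contrasts the library's approach with the vectorizing-map (vmap) baseline: instead of solving each trajectory in a loop, the entire ensemble of parameterized ODEs is advanced in lockstep as one batched array operation. A minimal sketch of that data-parallel ensemble pattern, using NumPy and a fixed-step explicit Euler integrator (the function names and the decay-equation example are illustrative, not the DiffEqGPU.jl API):

```python
import numpy as np

def ensemble_euler(f, u0, params, t0, t1, dt):
    """Integrate du/dt = f(u, p) for a whole batch of parameter sets at once.

    u0:     (n_states,) shared initial condition
    params: (n_traj, n_params), one row per ensemble member
    Returns (n_traj, n_states): the states at t1.
    """
    n_traj = params.shape[0]
    # Replicate the initial condition across the ensemble axis.
    u = np.broadcast_to(u0, (n_traj, u0.shape[0])).copy()
    n_steps = int(round((t1 - t0) / dt))
    for _ in range(n_steps):
        # One vectorized step advances every trajectory simultaneously --
        # this is the operation a GPU kernel parallelizes over trajectories.
        u = u + dt * f(u, params)
    return u

# Illustrative ensemble: du/dt = -p*u with 10,000 different decay rates p.
def decay(u, p):
    return -p * u

params = np.linspace(0.1, 2.0, 10_000).reshape(-1, 1)  # (10000, 1)
u0 = np.array([1.0])
u_final = ensemble_euler(decay, u0, params, 0.0, 1.0, 1e-3)
print(u_final.shape)  # (10000, 1)
```

On a GPU, each trajectory maps naturally to a thread, which is why hand-written CUDA kernels and the compiled kernels described in the paper can outperform the array-level vmap formulation sketched here.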

