Journal
COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING
Volume 419
Publisher
ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2023.116591
Keywords
Differential equations; Numerical simulation; GPU; Data-parallelism; Computer kernel; HPC
Abstract
We demonstrate a high-performance vendor-agnostic method for massively parallel solving of ensembles of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) on GPUs. The method is integrated with a widely used differential equation solver library in a high-level language (Julia's DifferentialEquations.jl) and enables GPU acceleration without requiring code changes by the user. Our approach achieves state-of-the-art performance compared to hand-optimized CUDA-C++ kernels while running 20-100x faster than the vectorizing map (vmap) approach implemented in JAX and PyTorch. Performance evaluation on NVIDIA, AMD, Intel, and Apple GPUs demonstrates performance portability and vendor agnosticism. We show composability with MPI to enable distributed multi-GPU workflows. The implemented solvers are fully featured, supporting event handling, automatic differentiation, and incorporation of datasets via the GPU's texture memory, allowing scientists to take advantage of GPU acceleration on all major current architectures without changing their model code and without loss of performance. We distribute the software as an open-source library, DiffEqGPU.jl.
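The abstract contrasts the paper's kernel-generation approach with the array-batched vmap style of JAX and PyTorch. As a rough illustration of the latter baseline (a CPU NumPy sketch, not the paper's implementation; the Lorenz system, the fixed-step RK4 integrator, and all names here are illustrative assumptions), batched ensemble integration threads a leading batch axis through every array operation, so one pass advances all trajectories at once:

```python
import numpy as np

def lorenz(u, p):
    # Batched right-hand side: u is (batch, 3), p is (batch, 3); returns (batch, 3).
    sigma, rho, beta = p[:, 0], p[:, 1], p[:, 2]
    x, y, z = u[:, 0], u[:, 1], u[:, 2]
    return np.stack([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z], axis=1)

def rk4_ensemble(f, u0, p, t0, t1, nsteps):
    # Fixed-step classical RK4; every stage is a batched array operation,
    # which is the data-parallel pattern that vmap-style APIs vectorize.
    dt = (t1 - t0) / nsteps
    u = u0.copy()
    for _ in range(nsteps):
        k1 = f(u, p)
        k2 = f(u + dt / 2 * k1, p)
        k3 = f(u + dt / 2 * k2, p)
        k4 = f(u + dt * k3, p)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

batch = 1024
rng = np.random.default_rng(0)
u0 = np.tile([1.0, 0.0, 0.0], (batch, 1))          # same initial state per trajectory
p = np.array([10.0, 28.0, 8 / 3]) * (0.9 + 0.2 * rng.random((batch, 3)))  # perturbed parameters
uf = rk4_ensemble(lorenz, u0, p, 0.0, 1.0, 1000)
print(uf.shape)  # (1024, 3): one final state per ensemble member
```

The paper's approach instead compiles a dedicated GPU kernel per solve, avoiding the per-step array-op overhead that this batched formulation incurs, which is where the reported 20-100x speedup over vmap comes from.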