Journal
INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING APPLICATIONS
Volume 36, Issue 2, Pages 153-166
Publisher
SAGE PUBLICATIONS LTD
DOI: 10.1177/10943420211017188
Keywords
Linear algebra; preconditioning; GPUs; CUDA; MPI
This work demonstrates a successful implementation of the adaptive Factored Sparse Approximate Inverse (aFSAI) preconditioner on a distributed-memory computer with GPU accelerators, showing through extensive numerical experiments that it outperforms more traditional preconditioners on challenging linear algebra problems.
The solution of linear systems of equations is a central task in many scientific and engineering applications. In many cases the solution of linear systems may take most of the simulation time, thus representing a major bottleneck in the further development of scientific and technical software. For large-scale simulations, nowadays accounting for several millions or even billions of unknowns, it is quite common to resort to preconditioned iterative solvers in order to exploit their low memory requirements and, at least potentially, their parallelism. Approximate inverses have proven to be robust and effective preconditioners in various contexts. In this work, we show how the adaptive Factored Sparse Approximate Inverse (aFSAI) preconditioner, characterized by a very high degree of parallelism, can be successfully implemented on a distributed-memory computer equipped with GPU accelerators. Exploiting GPUs in the adaptive FSAI set-up is not a trivial task; nevertheless, we show through extensive numerical experiments that the proposed approach outperforms more traditional preconditioners and exhibits close-to-ideal behavior on challenging linear algebra problems.
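To illustrate how a factored sparse approximate inverse plugs into a preconditioned iterative solver, the following is a minimal sketch of a *static-pattern* FSAI in Python/SciPy: for a symmetric positive definite matrix A, it builds a sparse lower-triangular G with G A Gᵀ ≈ I and applies M⁻¹ = Gᵀ G inside conjugate gradients. This is only a didactic serial sketch under simplifying assumptions (fixed sparsity pattern taken from the lower triangle of A); the paper's aFSAI selects the pattern adaptively and targets GPUs and MPI, and the function names here are illustrative, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def fsai(A):
    """Static-pattern FSAI sketch: build sparse lower-triangular G such that
    G A G^T ~= I, using the lower triangle of A as the (fixed) pattern.
    (The paper's aFSAI enlarges the pattern adaptively instead.)"""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    P = sp.tril(A).tocsr()                 # assumed pattern: lower triangle of A
    rows, cols, vals = [], [], []
    for i in range(n):
        J = P.indices[P.indptr[i]:P.indptr[i + 1]]   # row pattern; last entry is i
        Ajj = A[np.ix_(J, J)].toarray()              # small dense local system
        e = np.zeros(len(J)); e[-1] = 1.0            # unit vector at the diagonal
        g = np.linalg.solve(Ajj, e)
        g /= np.sqrt(g[-1])                          # scale so diag(G A G^T) = 1
        rows.extend([i] * len(J)); cols.extend(J); vals.extend(g)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# SPD test matrix: 1-D Laplacian
n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
G = fsai(A)

# Apply the preconditioner M^{-1} = G^T G inside CG
M = LinearOperator((n, n), matvec=lambda r: G.T @ (G @ r))
b = np.ones(n)
x, info = cg(A, b, M=M)
```

Note that applying the preconditioner requires only two sparse matrix-vector products, which is what makes this family of preconditioners attractive on GPUs compared with the triangular solves of incomplete factorizations.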