Article; Proceedings Paper

Fast radial basis function interpolation via preconditioned Krylov iteration

Journal

SIAM Journal on Scientific Computing
Volume 29, Issue 5, Pages 1876-1899

Publisher

SIAM Publications
DOI: 10.1137/060662083

Keywords

radial basis function interpolation; preconditioned conjugate gradient; cardinal function preconditioner; computational geometry; fast multipole method

Abstract

We consider a preconditioned Krylov subspace iterative algorithm presented by Faul, Goodsell, and Powell (IMA J. Numer. Anal. 25 (2005), pp. 1-24) for computing the coefficients of a radial basis function interpolant over N data points. This preconditioned Krylov iteration has been demonstrated to be extremely robust to the distribution of the points and to converge rapidly. However, the iterative method has several steps whose computational and memory costs scale as O(N²), both in the preliminary computations that build the preconditioner and in the matrix-vector product performed at each step of the iteration. We accelerate the iterative method to achieve an overall cost of O(N log N). The matrix-vector product is accelerated via the fast multipole method (FMM). The preconditioner requires the computation of a set of closest points to each point; we develop an O(N log N) algorithm for this step as well. Results are presented for multiquadric interpolation in R² and biharmonic interpolation in R³. A novel FMM algorithm for the evaluation of sums involving multiquadric functions in R² is also presented.
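
To make the setting concrete, the following is a minimal sketch in Python (not the authors' code) of the ingredients named above: assembling the dense biharmonic interpolation system in R³, finding each node's closest points with a kd-tree in O(N log N), and solving for the coefficients with a Krylov iteration. The node count N = 500, the choice of k = 30 neighbours, the sample data, and the use of SciPy's cKDTree and MINRES are all illustrative assumptions; the polynomial tail of the interpolant, the cardinal-function preconditioner, and the FMM-accelerated matrix-vector product from the paper are not reproduced, so this sketch still costs O(N²) per iteration.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
N = 500
X = rng.random((N, 3))                          # scattered interpolation nodes in R^3
f = np.sin(4 * X[:, 0]) * np.cos(4 * X[:, 1])   # sample data values at the nodes

# Biharmonic kernel in R^3: phi(r) = r.  Dense N-by-N matrix, O(N^2) memory;
# this is exactly the cost the paper's FMM-based matrix-vector product avoids.
A = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# k closest points of every node via a kd-tree, O(N log N); these index sets are
# the raw material from which a local, cardinal-function-style preconditioner
# would be built (each node's own index is included in its list).
k = 30
_, neighbours = cKDTree(X).query(X, k=k)

# Krylov solve of A @ lam = f.  MINRES (for symmetric indefinite systems) stands in
# for the preconditioned iteration of Faul, Goodsell, and Powell; the LinearOperator
# marks where an FMM-accelerated matrix-vector product would be plugged in.
op = LinearOperator((N, N), matvec=lambda v: A @ v, dtype=A.dtype)
lam, info = minres(op, f)
print("minres info:", info, "| residual norm:", np.linalg.norm(A @ lam - f))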

