Article

Adaptive Sparse Approximations of Scattered Data

Journal

JOURNAL OF SCIENTIFIC COMPUTING
Volume 90, Issue 3

Publisher

SPRINGER/PLENUM PUBLISHERS
DOI: 10.1007/s10915-021-01752-0

Keywords

Scattered data; Adaptive sparse approximation; Radial basis function; Least squares; Parallel computing

Funding

  1. National Natural Science Foundation of China [62003159]
  2. Natural Science Foundation of Jiangsu Province [BK20200332]
  3. National Science Foundation [CHE-1763198]


Abstract

In this paper, two adaptive approximations, sparse residual tree (SRT) and sparse residual forest (SRF), are proposed for multivariate scattered data. SRT not only yields sparse and stable approximations in areas where the data are sufficient or redundant, but also identifies the local regions where the data are inadequate. SRF combines SRT predictors to improve approximation accuracy and stability according to the error characteristics of the SRTs. The hierarchical parallel SRT algorithm is based on both tree decomposition and adaptive radial basis function (RBF) explorations: for each child, a sparse and proper RBF refinement is added to the approximation by minimizing the norm of the residual inherited from its parent. Convergence results are established for both SRTs and SRFs. The worst-case time complexity of SRTs is O(N log² N) for the initial work and O(log² N) for each prediction, while the worst-case storage requirement is O(N log² N), where the N data points can be arbitrarily distributed. Numerical experiments are performed for several illustrative examples.
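The refinement step described in the abstract, where each child of the tree fits a sparse RBF correction to the residual inherited from its parent, can be sketched in miniature as follows. This is a minimal illustration, not the authors' SRT algorithm: the Gaussian kernel, the random center selection, the Tikhonov regularization, and the single axis-aligned split are all assumptions made for the sketch.

```python
import numpy as np

def rbf_ls_fit(X, y, centers, eps=1.0, reg=1e-8):
    """Least-squares fit of Gaussian RBFs at the given centers.

    Returns the coefficients w minimizing ||A w - y||_2 (plus a small
    Tikhonov term), where A[i, j] = exp(-eps^2 * ||X[i] - centers[j]||^2).
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.exp(-(eps ** 2) * d2)
    # Regularized normal equations keep the solve well conditioned.
    return np.linalg.solve(A.T @ A + reg * np.eye(len(centers)), A.T @ y)

def rbf_eval(X, centers, w, eps=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-(eps ** 2) * d2) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))      # scattered 2-D data sites
y = np.sin(np.pi * X[:, 0]) * X[:, 1]      # sampled target values

# Root level: coarse sparse fit on a few centers drawn from the data.
centers0 = X[rng.choice(len(X), 8, replace=False)]
w0 = rbf_ls_fit(X, y, centers0)
pred = rbf_eval(X, centers0, w0)
residual = y - pred

# Child level: split the domain in two (a one-step tree decomposition)
# and fit each child's RBF refinement to the residual it inherits.
for mask in (X[:, 0] < 0, X[:, 0] >= 0):
    Xc, rc = X[mask], residual[mask]
    centersc = Xc[rng.choice(len(Xc), 8, replace=False)]
    wc = rbf_ls_fit(Xc, rc, centersc)
    pred[mask] += rbf_eval(Xc, centersc, wc)

print("residual norm before/after refinement:",
      np.linalg.norm(residual), np.linalg.norm(y - pred))
```

Because each child solves a least-squares problem against its inherited residual, the refined residual norm on each child can never exceed the inherited one; iterating this split-and-refine step down the tree is what drives the hierarchical construction.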

