4.7 Article

Kernel methods are competitive for operator learning

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 496

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2023.112549

Keywords

Operator learning; Optimal recovery; Kernel methods; Gaussian processes; Functional regression; Partial differential equations

Abstract

We present a general kernel-based framework for learning operators between Banach spaces, along with a priori error analysis and comprehensive numerical comparisons with popular neural network (NN) approaches such as Deep Operator Networks (DeepONet) [46] and the Fourier Neural Operator (FNO) [45]. We consider the setting where the input/output spaces of the target operator $\mathcal{G}^\dagger : \mathcal{U} \to \mathcal{V}$ are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations $\phi(u_t), \varphi(v_t)$ of input/output functions $v_t = \mathcal{G}^\dagger(u_t)$ ($t = 1, \dots, N$), and the measurement operators $\phi : \mathcal{U} \to \mathbb{R}^n$ and $\varphi : \mathcal{V} \to \mathbb{R}^m$ are linear. Writing $\psi : \mathbb{R}^n \to \mathcal{U}$ and $\chi : \mathbb{R}^m \to \mathcal{V}$ for the optimal recovery maps associated with $\phi$ and $\varphi$, we approximate $\mathcal{G}^\dagger$ with $\bar{\mathcal{G}} = \chi \circ \bar{f} \circ \phi$, where $\bar{f}$ is an optimal recovery approximation of $f^\dagger := \varphi \circ \mathcal{G}^\dagger \circ \psi : \mathbb{R}^n \to \mathbb{R}^m$. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning.
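The following is a minimal sketch of the encode-regress-decode recipe described in the abstract, under illustrative assumptions not taken from the paper: the measurement maps $\phi$ and $\varphi$ are pointwise evaluations on fixed grids, $\bar{f}$ is fit by kernel ridge regression with a Matérn-5/2 kernel applied independently to each output coordinate, and the helper names, grid sizes, lengthscale `rho`, and regularizer `lam` are all hypothetical choices for the toy example.

```python
import numpy as np

def matern52(X, Y, rho=1.0):
    """Matern-5/2 kernel matrix between rows of X (a x n) and Y (b x n)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) / rho
    return (1.0 + np.sqrt(5.0) * d + 5.0 * d**2 / 3.0) * np.exp(-np.sqrt(5.0) * d)

def fit_operator(U, V, lam=1e-8, rho=1.0):
    """Fit f_bar : R^n -> R^m by kernel ridge regression.

    U : (N, n) array, rows phi(u_t) = input-function values on a fixed grid.
    V : (N, m) array, rows varphi(v_t) = output-function values on a fixed grid.
    Returns a map from a new input vector phi(u) to an estimate of varphi(G(u)).
    """
    K = matern52(U, U, rho) + lam * np.eye(len(U))   # Gram matrix with nugget
    coeffs = np.linalg.solve(K, V)                    # (N, m) representer coefficients

    def f_bar(u_new):
        k = matern52(np.atleast_2d(u_new), U, rho)    # (1, N) cross-kernel vector
        return (k @ coeffs).ravel()                   # predicted output-grid values

    return f_bar

# Toy usage: learn the antiderivative operator u -> v with v(x) = int_0^x u(s) ds.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)                         # shared input/output grid (n = m = 50)
modes = np.array([np.sin(np.pi * k * x) for k in range(1, 4)])
U = rng.standard_normal((200, 3)) @ modes             # random low-frequency inputs
V = np.cumsum(U, axis=1) * (x[1] - x[0])              # crude quadrature for the target operator
f_bar = fit_operator(U[:150], V[:150], lam=1e-6, rho=5.0)
err = np.mean([np.linalg.norm(f_bar(u) - v) / np.linalg.norm(v)
               for u, v in zip(U[150:], V[150:])])
print(f"mean relative test error: {err:.3f}")
```

The sketch stops at $\bar{f}$: the decoding map $\chi$, which interpolates the predicted grid values back to a function in the output RKHS, is left implicit, and the paper's actual experiments, kernels, and error analysis are not reproduced here.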
