Article

Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 477, Issue -, Pages -

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2022.111902

Keywords

Scientific machine learning; Stochastic partial differential equations; Uncertainty quantification; Physics-informed neural networks; Neural operator learning; Bayesian framework

Abstract

Neural networks have transformed computational methods by solving challenging problems that are intractable with traditional approaches. However, quantifying errors and uncertainties in neural network-based inference is more complex than in traditional methods. This study presents a comprehensive framework for effectively and efficiently quantifying total uncertainty in neural networks, covering uncertainty modeling, solution methods, and evaluation metrics. Furthermore, an open-source Python library called NeuralUQ is developed to facilitate the deployment of uncertainty quantification in scientific machine learning research and practice.
Neural networks (NNs) are currently changing, in a profound way, the computational paradigm of how to combine data with mathematical laws in physics and engineering, tackling challenging inverse and ill-posed problems not solvable with traditional methods. However, quantifying errors and uncertainties in NN-based inference is more complicated than in traditional methods. This is because, in addition to the aleatoric uncertainty associated with noisy data, there is also uncertainty due to limited data, as well as uncertainty due to NN hyperparameters, overparametrization, optimization and sampling errors, and model misspecification. Although there are some recent works on uncertainty quantification (UQ) in NNs, there is no systematic investigation of suitable methods for quantifying the total uncertainty effectively and efficiently even for function approximation, and there is even less work on solving partial differential equations and learning operator mappings between infinite-dimensional function spaces using NNs. In this work, we present a comprehensive framework that includes uncertainty modeling, new and existing solution methods, as well as evaluation metrics and post-hoc improvement approaches. To demonstrate the applicability and reliability of our framework, we present an extensive comparative study in which various methods are tested on prototype problems, including problems with mixed input-output data and stochastic problems in high dimensions. In the Appendix, we include a comprehensive description of all the UQ methods employed. Further, to help facilitate the deployment of UQ in scientific machine learning research and practice, we present and develop in [1] an open-source Python library, NeuralUQ (github.com/Crunch-UQ4MI/neuraluq), which is accompanied by an educational tutorial and additional computational experiments. (C) 2022 Elsevier Inc. All rights reserved.
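To make the abstract's notion of total uncertainty and its evaluation metrics concrete, the sketch below is a minimal, generic baseline for 1D function approximation: a deep ensemble of small MLPs whose disagreement proxies the uncertainty due to limited data, combined with a crude residual-based noise estimate and checked with a 95% interval coverage metric. This is not the paper's framework and not the NeuralUQ API; the synthetic data, network sizes, ensemble size, and the residual-based noise estimate are illustrative assumptions, and the example only requires NumPy and scikit-learn.

# Minimal deep-ensemble UQ sketch (illustrative only; not the paper's method).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data: y = sin(2*pi*x) + Gaussian noise (data-noise source),
# observed only on part of the domain (limited-data source).
x_train = rng.uniform(-0.5, 0.5, size=60)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(60)
x_test = np.linspace(-1.0, 1.0, 200)
y_true = np.sin(2 * np.pi * x_test)

# Deep ensemble: train the same architecture from different random seeds.
ensemble = []
for seed in range(10):
    net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                       max_iter=5000, random_state=seed)
    net.fit(x_train.reshape(-1, 1), y_train)
    ensemble.append(net)

preds = np.stack([net.predict(x_test.reshape(-1, 1)) for net in ensemble])
mean = preds.mean(axis=0)           # predictive mean
epistemic_std = preds.std(axis=0)   # disagreement across the ensemble

# Crude data-noise (aleatoric) estimate from training residuals of the ensemble mean.
train_preds = np.stack([net.predict(x_train.reshape(-1, 1)) for net in ensemble])
aleatoric_std = np.std(y_train - train_preds.mean(axis=0))

total_std = np.sqrt(epistemic_std**2 + aleatoric_std**2)

# One common post-hoc check: empirical coverage of the nominal 95% interval.
lower, upper = mean - 1.96 * total_std, mean + 1.96 * total_std
coverage = np.mean((y_true >= lower) & (y_true <= upper))
print(f"95% interval coverage over the test grid: {coverage:.2f}")

Outside the training interval [-0.5, 0.5] the ensemble members disagree, so the interval widens; a coverage value far from 0.95 would signal miscalibration, which is the kind of diagnostic the evaluation metrics discussed in the paper are designed to capture more rigorously.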

