Article

fPINNs: FRACTIONAL PHYSICS-INFORMED NEURAL NETWORKS

Journal

SIAM JOURNAL ON SCIENTIFIC COMPUTING
Volume 41, Issue 4, Pages A2603-A2626

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/18M1229845

Keywords

physics-informed learning machines; fractional advection-diffusion; fractional inverse problem; parameter identification; numerical error analysis

Funding

  1. Army Research Office (ARO) [W911NF-18-1-0301]
  2. ARO MURI [W911NF-15-1-0562]
  3. Department of Energy [DE-SC0019434, DE-SC0019453]

Abstract

Physics-informed neural networks (PINNs), introduced in [M. Raissi, P. Perdikaris, and G. E. Karniadakis, J. Comput. Phys., 378 (2019), pp. 686-707], are effective in solving integer-order partial differential equations (PDEs) based on scattered and noisy data. PINNs employ standard feedforward neural networks (NNs) with the PDEs explicitly encoded into the NN using automatic differentiation, while the sum of the mean-squared PDE residuals and the mean-squared error in initial-boundary conditions is minimized with respect to the NN parameters. Here we extend PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and we study systematically their convergence, hence explaining both fPINNs and PINNs for the first time. Specifically, we demonstrate their accuracy and effectiveness in solving multidimensional forward and inverse problems with forcing terms whose values are only known at randomly scattered spatio-temporal coordinates (black-box (BB) forcing terms). A novel element of the fPINNs is the hybrid approach that we introduce for constructing the residual in the loss function using both automatic differentiation for the integer-order operators and numerical discretization for the fractional operators. This approach bypasses the difficulties stemming from the fact that automatic differentiation is not applicable to fractional operators because the standard chain rule in integer calculus is not valid in fractional calculus. To discretize the fractional operators, we employ the Grünwald-Letnikov (GL) formula in one-dimensional fractional ADEs and the vector GL formula in conjunction with the directional fractional Laplacian in two- and three-dimensional fractional ADEs. We first consider the one-dimensional fractional Poisson equation and compare the convergence of the fPINNs against the finite difference method (FDM). We present the solution convergence using both the mean L^2 error as well as the standard deviation due to sensitivity to NN parameter initializations. Using different GL formulas, we observe first-, second-, and third-order convergence rates for small training sets, but the error saturates for larger training sets. We explain these results by analyzing the four sources of numerical error, due to discretization, sampling, NN approximation, and optimization. The total error decays monotonically (below 10^-5 for a third-order GL formula) but saturates beyond that point due to the optimization error. We also analyze the relative balance between discretization and sampling errors and observe that the sampling size and the number of discretization points (auxiliary points) should be comparable to achieve the highest accuracy. As we increase the depth of the NN up to a certain value, the mean error decreases and the standard deviation increases, whereas the width has essentially no effect unless its value is either too small or too large. We next consider time-dependent fractional ADEs and compare white-box (WB) and BB forcing. We observe that for WB forcing our results are similar to the aforementioned cases; however, for BB forcing fPINNs outperform FDM. Subsequently, we consider multidimensional time-, space-, and space-time-fractional ADEs using the directional fractional Laplacian, and we observe relative errors of 10^-3 to 10^-4. Finally, we solve several inverse problems in one, two, and three dimensions to identify the fractional orders, diffusion coefficients, and transport velocities, and we obtain accurate results given proper initializations even in the presence of significant noise.
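
As a quick illustration of the loss structure described in the abstract, the training objective combines a mean-squared PDE-residual term with a mean-squared initial/boundary (or data) term; the schematic below is generic notation, not necessarily the paper's exact weighting or symbols:

\[
\mathcal{L}(\theta) \;=\;
\frac{1}{N_f}\sum_{i=1}^{N_f}\bigl|\mathcal{R}[u_\theta](x_i,t_i)\bigr|^2
\;+\;
\frac{1}{N_u}\sum_{j=1}^{N_u}\bigl|u_\theta(x_j,t_j)-u_j\bigr|^2,
\]

where u_theta is the NN surrogate, R[u_theta] is the fractional-ADE residual (with the fractional terms evaluated by a numerical discretization such as the GL formula at auxiliary points and the integer-order terms by automatic differentiation), and (x_j, t_j, u_j) are the initial/boundary training data. The symbols N_f, N_u, and R are illustrative.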
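The GL formula itself is straightforward to implement. The sketch below is a minimal NumPy illustration (hypothetical function names, first-order unshifted variant rather than whichever shifted/higher-order formula the authors use): it approximates a left-sided Riemann-Liouville fractional derivative on a uniform grid and checks it against a known closed form.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients g_k = (-1)^k * binom(alpha, k),
    via the recurrence g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def gl_fractional_derivative(u, alpha, h):
    """First-order (unshifted) GL approximation of the left-sided
    Riemann-Liouville derivative of order alpha on a uniform grid
    with spacing h; u holds samples u(x_0), ..., u(x_N)."""
    n = len(u) - 1
    g = gl_weights(alpha, n)
    d = np.empty_like(u)
    for j in range(n + 1):
        # D^alpha u(x_j) ~ h^{-alpha} * sum_{k=0}^{j} g_k * u(x_{j-k})
        d[j] = np.dot(g[: j + 1], u[j::-1]) / h**alpha
    return d

# Sanity check against the exact result D^alpha x^2 = Gamma(3)/Gamma(3-alpha) * x^(2-alpha)
alpha, N = 0.5, 2000
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
approx = gl_fractional_derivative(x**2, alpha, h)
exact = gamma(3) / gamma(3 - alpha) * x**(2 - alpha)
print(np.max(np.abs(approx - exact)))  # shrinks roughly like O(h) as the grid is refined
```

In the hybrid residual described above, such a discretization would supply the fractional terms at auxiliary points, while the integer-order terms of the ADE are obtained from the NN by automatic differentiation.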

Authors

Guofei Pang, Lu Lu, and George Em Karniadakis
