Journal
NATURE COMMUNICATIONS
Volume 12, Issue 1
Publisher
NATURE PORTFOLIO
DOI: 10.1038/s41467-021-26107-z
Funding
- Office of Biological and Environmental Research, U.S. Department of Energy [DE-SC0016605]
- National Science Foundation [1832294, 1940190, 2018280]
- NSF Division of Earth Sciences, Directorate for Geosciences [1832294]
- NSF Office of Advanced Cyberinfrastructure (OAC), Directorate for Computer & Information Science & Engineering [1940190]
This study introduces a novel differentiable parameter learning framework that efficiently learns a global mapping between inputs and parameters, demonstrating improved model performance and generalizability with significantly lower computational cost. Through examples in soil moisture and streamflow, the method outperforms existing approaches and requires only a fraction of the training data to achieve similar performance.
The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially-varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples learned from soil moisture and streamflow, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
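The core idea of dPL can be sketched in a toy form: a single global "parameter network" maps basin attributes to a physical parameter, that parameter drives a differentiable process model, and the loss gradient flows back through the model physics into the shared network weights. The sketch below is a minimal, hypothetical illustration in numpy (the actual framework uses deep networks and full hydrologic models such as VIC; the linear-sigmoid network, the one-coefficient runoff model `Q = c * P`, and all variable names here are assumptions for illustration only).

```python
# Minimal dPL-style sketch (illustrative only, not the paper's implementation):
# one global parameter network g(attributes; w) is trained for ALL basins at
# once, with gradients propagated through a toy differentiable process model.
import numpy as np

rng = np.random.default_rng(0)

n_basins, n_attr, n_steps = 50, 3, 100
A = rng.normal(size=(n_basins, n_attr))               # static basin attributes
P = rng.uniform(0.0, 10.0, size=(n_basins, n_steps))  # precipitation forcing

# Synthetic "truth": each basin's runoff coefficient depends on its attributes
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w_true = np.array([0.8, -0.5, 0.3])
c_true = sigmoid(A @ w_true)
Q_obs = c_true[:, None] * P                           # "observed" streamflow

# Train ONE shared weight vector w (the global input-to-parameter mapping)
w = np.zeros(n_attr)
lr = 0.01
for step in range(2000):
    c = sigmoid(A @ w)                # parameter network: attributes -> c
    Q = c[:, None] * P                # toy differentiable process model
    err = Q - Q_obs
    loss = np.mean(err ** 2)
    # Backprop by hand: dL/dw = dL/dQ * dQ/dc * dc/dz * dz/dw
    dL_dc = np.mean(2.0 * err * P, axis=1)   # per-basin gradient w.r.t. c
    dc_dz = c * (1.0 - c)                    # sigmoid derivative
    grad = A.T @ (dL_dc * dc_dz) / n_basins
    w -= lr * grad

print(f"final loss: {loss:.6f}")
print("learned mapping weights:", np.round(w, 2))
```

Because every basin updates the same weight vector `w`, adding more training basins constrains the mapping rather than adding unknowns, which is the scaling behavior the abstract highlights (in contrast to per-site calibration, where each new basin brings its own free parameters).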