# $\delta$ parameter learning

## Code Release

- Code is released at [this Zenodo link](https://doi.org/10.5281/zenodo.5227738)
## Summary

From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling for routing flows on the river network.
The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples learned from soil moisture and streamflow data, in which dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models without mandating reimplementation.
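To make the scheme concrete, below is a minimal sketch of the dPL idea in PyTorch. This is an illustration under stated assumptions, not the released code: a toy linear-reservoir model stands in for the actual differentiable process-based model, the data are synthetic, and the names `ParameterNetwork` and `linear_reservoir` are hypothetical.

```python
# Minimal sketch of differentiable parameter learning (dPL).
# Assumptions: a toy linear-reservoir "process model" replaces the real
# hydrologic model, and synthetic data replace the soil-moisture /
# streamflow observations used in the paper.
import torch
import torch.nn as nn


class ParameterNetwork(nn.Module):
    """Maps static basin attributes to physical model parameters
    (here, one recession constant per basin)."""

    def __init__(self, n_attributes: int, n_params: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_attributes, 32), nn.ReLU(),
            nn.Linear(32, n_params), nn.Sigmoid(),  # constrain to (0, 1)
        )

    def forward(self, attributes: torch.Tensor) -> torch.Tensor:
        return self.net(attributes)


def linear_reservoir(precip: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Toy differentiable process model: storage fills with precipitation
    and drains as Q_t = k * S_t. Every operation is differentiable, so
    gradients flow back through the model into the parameter network."""
    k = k.squeeze(-1)                              # (n_basins,)
    storage = torch.zeros(precip.shape[0])
    flows = []
    for t in range(precip.shape[1]):
        storage = storage + precip[:, t]
        q = k * storage
        storage = storage - q
        flows.append(q)
    return torch.stack(flows, dim=1)               # (n_basins, n_steps)


# Synthetic data: 64 basins, 5 static attributes, 100 time steps.
torch.manual_seed(0)
n_basins, n_attr, n_steps = 64, 5, 100
attributes = torch.rand(n_basins, n_attr)
precip = torch.rand(n_basins, n_steps)
true_k = 0.2 + 0.6 * attributes[:, :1]             # hidden attribute-parameter map
observed_q = linear_reservoir(precip, true_k)

g = ParameterNetwork(n_attr)
optimizer = torch.optim.Adam(g.parameters(), lr=1e-2)
for epoch in range(200):
    optimizer.zero_grad()
    k_hat = g(attributes)                           # one shared mapping, all basins
    simulated_q = linear_reservoir(precip, k_hat)
    loss = torch.mean((simulated_q - observed_q) ** 2)
    loss.backward()                                 # end-to-end differentiation
    optimizer.step()
print(f"final MSE: {loss.item():.6f}")
```

Because the misfit is differentiated end to end through the process model's physics, a single attribute-to-parameter mapping is trained jointly across all basins rather than calibrating each basin independently; this shared mapping is what underlies the data-scaling and spatial-generalization behavior described above.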
## BibTeX Citation

```bibtex
@article{Tsai_2021,
  title={From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling},
  volume={12},
  ISSN={2041-1723},
  url={http://dx.doi.org/10.1038/s41467-021-26107-z},
  DOI={10.1038/s41467-021-26107-z},
  number={1},
  journal={Nature Communications},
  publisher={Springer Science and Business Media LLC},
  author={Tsai, Wen-Ping and Feng, Dapeng and Pan, Ming and Beck, Hylke and Lawson, Kathryn and Yang, Yuan and Liu, Jiangtao and Shen, Chaopeng},
  year={2021},
  month=oct
}
```