Article

Time to Update the Split-Sample Approach in Hydrological Model Calibration

Journal

WATER RESOURCES RESEARCH
Volume 58, Issue 3

Publisher

AMER GEOPHYSICAL UNION
DOI: 10.1029/2021WR031523

Funding

  1. Canada First Research Excellence Fund
  2. Integrated Modeling Program for Canada (IMPC)
  3. Natural Sciences and Engineering Research Council of Canada (NSERC)

This study empirically assesses how different data splitting methods influence post-validation model testing period performance in hydrological modeling. The findings suggest that calibrating to older data and then validating models on newer data produces inferior model testing period performance, while calibrating to the full available data and skipping model validation is the most robust split-sample decision. The experimental findings remain consistent across different factors and strongly support revising the traditional split-sample approach in hydrological modeling.

Model calibration and validation are critical to assessing hydrological model robustness. Unfortunately, the commonly used split-sample test (SST) framework for data splitting requires modelers to make subjective decisions without clear guidelines. This large-sample SST assessment study empirically assesses how different data splitting methods influence post-validation model performance in the testing period, thereby identifying optimal data splitting methods under different conditions. The study investigates the performance of two lumped conceptual hydrological models calibrated and tested in 463 catchments across the United States using 50 different data splitting schemes. These schemes vary in data availability and in the length and recentness of continuous calibration sub-periods (CSPs). A full-period CSP, which skips model validation, is also included in the experiment. The assessment approach is novel in multiple ways, including framing model building decisions as a decision tree problem and viewing the model building process as a formal testing-period classification problem that aims to accurately predict model success or failure in the testing period. Results span diverse climate and catchment conditions across a 35-year period of available data, making the conclusions quite generalizable. Calibrating to older data and then validating models on newer data produced inferior testing period performance in every analysis conducted and should be avoided. Calibrating to the full available data and skipping model validation entirely is the most robust split-sample decision. The experimental findings remain consistent no matter how model building factors (i.e., catchments, model types, data availability, and testing periods) are varied, and they strongly support revising the traditional split-sample approach in hydrological modeling.
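The data splitting design described above, enumerating continuous calibration sub-periods (CSPs) of varying length and recentness alongside a full-period CSP that skips validation, can be sketched in code. This is a minimal hypothetical illustration: the function name `continuous_csps`, the dictionary structure, and the specific sub-period lengths are assumptions for clarity, not the paper's actual implementation or its 50 schemes.

```python
def continuous_csps(years, lengths):
    """Enumerate continuous calibration sub-periods (CSPs).

    For each requested length, every contiguous run of years of that
    length becomes a calibration period; the remaining years form the
    validation period. A final full-period CSP calibrates to all data
    and skips validation entirely. Illustrative sketch only.
    """
    schemes = []
    for n in lengths:
        for start in range(len(years) - n + 1):
            cal = years[start:start + n]
            val = [y for y in years if y not in cal]
            schemes.append({"calibration": cal, "validation": val})
    # Full-period CSP: use all available data, no validation period.
    schemes.append({"calibration": list(years), "validation": []})
    return schemes

record = list(range(1981, 2016))      # a 35-year record, 1981-2015
schemes = continuous_csps(record, [10, 20])
oldest_first = schemes[0]             # calibrate to the oldest 10 years
full_period = schemes[-1]             # the full-period scheme the study favors
```

Under this sketch, the abstract's contrast is between schemes like `oldest_first` (old calibration data, newer validation data) and `full_period` (all data for calibration, no validation), with the study finding the latter most robust.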
