Article

Time to Update the Split-Sample Approach in Hydrological Model Calibration

Journal

WATER RESOURCES RESEARCH
Volume 58, Issue 3

Publisher

AMER GEOPHYSICAL UNION
DOI: 10.1029/2021WR031523

Keywords

-

Funding

  1. Canada First Research Excellence Fund
  2. Integrated Modeling Program for Canada (IMPC)
  3. Natural Sciences and Engineering Research Council of Canada (NSERC)

Abstract

Model calibration and validation are critical in assessing hydrological model robustness. Unfortunately, the commonly used split-sample test (SST) framework for data splitting requires modelers to make subjective decisions without clear guidelines. This large-sample SST assessment study empirically assesses how different data splitting methods influence post-validation model performance in the testing period, thereby identifying optimal data splitting methods under different conditions. The study investigates the performance of two lumped conceptual hydrological models calibrated and tested in 463 catchments across the United States using 50 different data splitting schemes. These schemes vary in data availability and in the length and recency of continuous calibration sub-periods (CSPs). A full-period CSP, which skips model validation entirely, is also included in the experiment. The assessment approach is novel in multiple ways, including framing model building decisions as a decision tree problem and treating the model building process as a formal testing-period classification problem that aims to accurately predict model success or failure in the testing period. Results span different climate and catchment conditions across a 35-year period with available data, making the conclusions quite generalizable. Calibrating to older data and then validating models on newer data produces inferior testing-period performance in every single analysis conducted and should be avoided. Calibrating to the full available data and skipping model validation entirely is the most robust split-sample decision. Experimental findings remain consistent no matter how model building factors (i.e., catchments, model types, data availability, and testing periods) are varied. Results strongly support revising the traditional split-sample approach in hydrological modeling.
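To make the experimental design concrete, the sketch below enumerates continuous calibration sub-periods (CSPs) of several lengths from a multi-year record, plus the full-period scheme that skips validation. This is an illustrative reconstruction, not the authors' code: the function name, the CSP lengths, and the non-overlapping scheme layout are assumptions for demonstration.

```python
# Illustrative sketch of split-sample scheme generation (assumed layout,
# not the study's actual 50-scheme design).

def csp_schemes(years, csp_lengths=(5, 10, 15)):
    """Enumerate non-overlapping continuous calibration sub-periods (CSPs)
    of the given lengths, plus a full-period scheme with no validation."""
    schemes = []
    for length in csp_lengths:
        for start in range(0, len(years) - length + 1, length):
            cal = years[start:start + length]
            # Years outside the CSP form the validation period.
            val = [y for y in years if y not in cal]
            schemes.append({"calibration": cal, "validation": val})
    # Full-period CSP: calibrate to all available data, skip validation.
    schemes.append({"calibration": list(years), "validation": []})
    return schemes

if __name__ == "__main__":
    years = list(range(1981, 2016))  # a 35-year record, as in the study
    schemes = csp_schemes(years)
    print(len(schemes), "schemes generated")  # → 13 schemes generated
```

Under this layout, each scheme pairs a calibration period with the remaining years for validation, while the final full-period entry reflects the paper's most robust option: calibrating to all available data and skipping validation.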

Authors


Reviews

Primary rating

4.7
Insufficient ratings

Secondary ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommendations

No data available