Article

Targeted cross-validation

Journal

BERNOULLI
Volume 29, Issue 1, Pages 377-402

Publisher

INT STATISTICAL INST
DOI: 10.3150/22-BEJ1461

Keywords

Consistency; cross-validation; model selection; regression

In many applications, we have access to the complete dataset but are only interested in the prediction of a particular region of predictor variables. A standard approach is to find the globally best modeling method from a set of candidate methods. However, it is perhaps rare in reality that one candidate method is uniformly better than the others. A natural approach for this scenario is to apply a weighted L2 loss in performance assessment to reflect the region-specific interest. We propose a targeted cross-validation (TCV) to select models or procedures based on a general weighted L2 loss. We show that the TCV is consistent in selecting the best performing candidate under the weighted L2 loss. Experimental studies are used to demonstrate the use of TCV and its potential advantage over the global CV or the approach of using only local data for modeling a local region. Previous investigations on CV have relied on the condition that when the sample size is large enough, the ranking of two candidates stays the same. However, in many applications with the setup of changing data-generating processes or highly adaptive modeling methods, the relative performance of the methods is not static as the sample size varies. Even with a fixed data-generating process, it is possible that the ranking of two methods switches infinitely many times. In this work, we broaden the concept of the selection consistency by allowing the best candidate to switch as the sample size varies, and then establish the consistency of the TCV. This flexible framework can be applied to high-dimensional and complex machine learning scenarios where the relative performances of modeling procedures are dynamic.
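To illustrate the idea of the abstract, the following is a minimal sketch of cross-validation under a weighted L2 loss. The weight function, the candidate models (polynomial fits of two degrees), and the toy data-generating process are all illustrative assumptions, not the paper's actual experiments; the paper allows a general weight function and arbitrary candidate procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(6x) + noise on [0, 1]; suppose the region of interest
# is x > 0.5 (an assumption for illustration).
n = 400
x = rng.uniform(0, 1, n)
y = np.sin(6 * x) + 0.3 * rng.normal(size=n)

def fit_poly(deg, xtr, ytr):
    """Fit a degree-`deg` polynomial; return a prediction function."""
    coef = np.polyfit(xtr, ytr, deg)
    return lambda xs: np.polyval(coef, xs)

def weight(xs):
    # Weight function w(x) encoding the region-specific interest:
    # only errors in the target region x > 0.5 count.
    return (xs > 0.5).astype(float)

def targeted_cv(candidates, x, y, w, k=5):
    """K-fold CV scored by the weighted L2 loss w(x) * (y - fhat(x))^2,
    averaged over held-out points."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    scores = np.zeros(len(candidates))
    for fold in folds:
        train = np.ones(len(x), dtype=bool)
        train[fold] = False
        for i, fit in enumerate(candidates):
            fhat = fit(x[train], y[train])
            resid = y[fold] - fhat(x[fold])
            scores[i] += np.sum(w(x[fold]) * resid**2)
    return scores / len(x)

# Two candidate procedures: a linear fit and a degree-5 polynomial fit.
candidates = [
    lambda xt, yt: fit_poly(1, xt, yt),
    lambda xt, yt: fit_poly(5, xt, yt),
]
scores = targeted_cv(candidates, x, y, weight)
best = int(np.argmin(scores))
```

Replacing `weight` with a constant function recovers the global CV criterion, so the difference between the two selections shows how the region-specific weighting can change which candidate is preferred.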
