Article

Honest leave-one-out cross-validation for estimating post-tuning generalization error

Journal

STAT
Volume 10, Issue 1, Pages -

Publisher

WILEY
DOI: 10.1002/sta4.413

Keywords

bootstrap; prediction; resampling methods; statistical learning

Funding

  1. NSF [1915-842]

Abstract

Many machine learning models have tuning parameters that must be determined from the training data, and cross-validation (CV) is perhaps the most commonly used method for selecting them. This work concerns the problem of estimating the generalization error of a CV-tuned predictive model. We propose an honest leave-one-out cross-validation framework that produces a nearly unbiased estimator of the post-tuning generalization error. Using the kernel support vector machine and kernel logistic regression as examples, we demonstrate that honest leave-one-out cross-validation performs very competitively, even against the state-of-the-art .632+ estimator.
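
The brute-force form of this idea is a nested resampling scheme: for each leave-one-out split, the tuning parameters are re-selected by CV on the remaining n - 1 observations, so the held-out prediction reflects the entire tuning-plus-fitting pipeline. The sketch below illustrates that naive nested scheme with a kernel SVM in scikit-learn; it is only a conceptual illustration under assumed choices (the parameter grid, toy data and variable names are hypothetical), not the authors' algorithm.

# Naive nested ("honest") leave-one-out CV sketch, assuming scikit-learn.
# Grid, toy data and names are illustrative, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Inner CV re-tunes (C, gamma) on the n - 1 retained points, so the
    # held-out point never influences its own tuning-parameter choice.
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X[train_idx], y[train_idx])
    # 0/1 loss on the single held-out observation.
    errors.append(search.predict(X[test_idx])[0] != y[test_idx][0])

# Average held-out loss estimates the post-tuning generalization error.
print("honest LOO-CV error estimate:", np.mean(errors))

This naive version repeats the full inner CV search n times and is therefore expensive; it serves here only as a reference point for the kind of post-tuning error that the abstract's honest leave-one-out framework aims to estimate without bias.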
