Article

Estimation of prediction error by using K-fold cross-validation

Journal

STATISTICS AND COMPUTING
Volume 21, Issue 2, Pages 137-146

Publisher

SPRINGER
DOI: 10.1007/s11222-009-9153-8

Keywords

Bias correction; K-fold cross-validation; Large dataset; Prediction error

Funding

  1. Ministry of Education, Culture, Sports, Science and Technology, Japan [20700260]
  2. Grants-in-Aid for Scientific Research [20700260] Funding Source: KAKEN

Abstract

Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of the prediction error, but it has a downward bias. K-fold cross-validation, on the other hand, has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Because the training error is biased downward and K-fold cross-validation is biased upward, an appropriate estimate should exist within a family that connects the two. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
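The opposite biases described above can be illustrated with a small sketch. The code below computes the training error and the 5-fold cross-validation error of an ordinary least-squares fit, then forms a simple convex combination of the two; the linear interpolation used here (and all function names) is an assumption for illustration only, not the specific families studied in the paper.

```python
import numpy as np

# Illustrative sketch, NOT the paper's method: the training error is
# downward-biased and K-fold CV is upward-biased, so an estimate
# err(lam) = (1 - lam) * train_err + lam * cv_err interpolates between them.

def fit_ols(X, y):
    """Least-squares fit; returns the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def mse(X, y, beta):
    """Mean squared prediction error of a fitted model."""
    resid = y - X @ beta
    return float(np.mean(resid ** 2))

def kfold_cv_error(X, y, k, rng):
    """Average held-out MSE over K folds (random fold assignment)."""
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        beta = fit_ols(X[train_idx], y[train_idx])
        errs.append(mse(X[test_idx], y[test_idx], beta))
    return float(np.mean(errs))

def family_estimate(train_err, cv_err, lam):
    """Convex combination: lam=0 gives the training error, lam=1 gives K-fold CV."""
    return (1.0 - lam) * train_err + lam * cv_err

# Synthetic linear-regression data for demonstration.
rng = np.random.default_rng(0)
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

train_err = mse(X, y, fit_ols(X, y))          # downward-biased estimate
cv_err = kfold_cv_error(X, y, k=5, rng=rng)   # upward-biased estimate
```

On such data the CV error typically exceeds the training error, and intermediate values of `lam` trace out estimates between the two biased extremes.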
