4.1 Article

Multiple predicting K-fold cross-validation for model selection

Journal

JOURNAL OF NONPARAMETRIC STATISTICS
Volume 30, Issue 1, Pages 197-215

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/10485252.2017.1404598

Keywords

Cross-validation; K-fold cross-validation; model selection; tuning parameter selection

Funding

  1. National Research Foundation of Korea [NRF-2017R1C1B5017431]
  2. Korea University [K1705711]


K-fold cross-validation (CV) is widely adopted as a model selection criterion. In K-fold CV, (K - 1) folds are used for model construction and the hold-out fold is allocated to model validation. This implies that model construction is emphasised more than the model validation procedure. However, some studies have revealed that placing more emphasis on the validation procedure may result in improved model selection. Specifically, leave-m-out CV with n samples may achieve variable-selection consistency when m/n approaches 1. In this study, a new CV method is proposed within the framework of K-fold CV. The proposed method uses (K - 1) folds of the data for model validation, while the other fold is used for model construction. This provides (K - 1) predicted values for each observation, which are averaged to produce a final predicted value. Model selection based on these averaged predicted values can then reduce variation in the assessment due to the averaging. The variable-selection consistency of the suggested method is established. Its advantage over K-fold CV with finite samples is examined under linear, non-linear, and high-dimensional models.
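The procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the random fold assignment, the plain least-squares model, and the mean-squared-error score are all assumptions made for the example. The key reversal relative to standard K-fold CV is that each single fold is used for model construction, so every observation receives (K - 1) predictions from the folds it does not belong to, and these are averaged before scoring.

```python
import numpy as np

def multiple_predicting_kfold_cv(X, y, K=5, seed=0):
    """Illustrative sketch of multiple predicting K-fold CV.

    Each fold alone is used for model construction; the remaining
    (K - 1) folds are used for validation. Every observation thus
    receives (K - 1) predictions, which are averaged before the
    error is computed. A linear least-squares model stands in for
    whatever candidate model is being evaluated (an assumption).
    """
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % K          # balanced random fold labels
    pred_sum = np.zeros(n)
    pred_cnt = np.zeros(n)
    for k in range(K):
        train = folds == k                  # ONE fold for construction
        test = ~train                       # (K - 1) folds for validation
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred_sum[test] += X[test] @ beta
        pred_cnt[test] += 1                 # each point predicted K-1 times
    y_bar = pred_sum / pred_cnt             # averaged predicted values
    return np.mean((y - y_bar) ** 2)        # CV score for this model
```

In use, this score would be computed for each candidate model (e.g. each subset of variables), and the model with the smallest score selected.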
