4.3 Article

Honest leave-one-out cross-validation for estimating post-tuning generalization error

Related references

Note: Only a subset of the related references is listed.
Article · Statistics & Probability

Fast and Exact Leave-One-Out Analysis of Large-Margin Classifiers

Boxiang Wang et al.

Summary: Inspired by the Golub-Heath-Wahba formula for ridge regression, the authors introduce a new leave-one-out lemma for kernel SVM and related large-margin classifiers. Building on this lemma, an efficient algorithm named magicsvm trains kernel SVM and computes the exact leave-one-out cross-validation error faster than state-of-the-art SVM solvers; the same machinery also speeds up V-fold cross-validation for kernel classifiers. A sketch of the ridge-regression shortcut that inspired the lemma follows this entry.

TECHNOMETRICS (2022)
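
For context, the Golub-Heath-Wahba identity recovers all exact leave-one-out residuals of ridge regression from a single full-data fit: e_i = (y_i - yhat_i) / (1 - H_ii), where H = X(X'X + lambda*I)^{-1}X' is the hat matrix. The sketch below illustrates only this ridge identity, not the magicsvm algorithm for kernel SVM; all function names are illustrative.

```python
import numpy as np

def ridge_hat_matrix(X, lam):
    """Hat matrix H = X (X'X + lam*I)^{-1} X' for ridge regression."""
    n, p = X.shape
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

def loo_errors_ghw(X, y, lam):
    """Exact LOO residuals via the Golub-Heath-Wahba shortcut:
    e_i = (y_i - yhat_i) / (1 - H_ii), from one full-data fit."""
    H = ridge_hat_matrix(X, lam)
    resid = y - H @ y
    return resid / (1.0 - np.diag(H))

def loo_errors_brute(X, y, lam):
    """Reference implementation: refit with each observation held out."""
    n, p = X.shape
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(p),
                               X[mask].T @ y[mask])
        errs[i] = y[i] - X[i] @ beta
    return errs

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)
assert np.allclose(loo_errors_ghw(X, y, 1.0), loo_errors_brute(X, y, 1.0))
```

The one-fit shortcut replaces n refits with a single matrix factorization; the cited paper extends the same idea to kernel large-margin classifiers.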

Article · Statistics & Probability

Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?

Ryan J. Tibshirani et al.

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION (2019)

Article · Economics

Cross-validation for selecting a model selection procedure

Yongli Zhang et al.

JOURNAL OF ECONOMETRICS (2015)

Article · Computer Science, Artificial Intelligence

Sensitivity Analysis of k-Fold Cross Validation in Prediction Error Estimation

Juan Diego Rodriguez et al.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2010)

Article · Statistics & Probability

A bias correction for the minimum error rate in cross-validation

Ryan J. Tibshirani et al.

ANNALS OF APPLIED STATISTICS (2009)

Article · Biochemical Research Methods

Prediction error estimation: a comparison of resampling methods

Annette M. Molinaro et al.

BIOINFORMATICS (2005)

Article · Statistics & Probability

The estimation of prediction error: Covariance penalties and cross-validation

Bradley Efron

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION (2004)