Article

Empirical comparison of cross-validation and internal metrics for tuning SVM hyperparameters

Journal

PATTERN RECOGNITION LETTERS
Volume 88, Pages 6-11

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2017.01.007

Keywords

SVM; Internal metrics; Cross validation; Hyper-parameter tuning; Model selection


Hyperparameter tuning is a mandatory step in building a support vector machine classifier. In this work, we study methods based on metrics of the training set itself, rather than on the performance of the classifier on a separate test set, which is the usual cross-validation approach. We compare 5-fold cross-validation with Xi-alpha, the radius-margin bound, generalized approximate cross-validation, maximum discrepancy, and distance between two classes on 110 public binary data sets. Cross-validation is the method that resulted in the best selection of hyperparameters, but it is also among the slowest. Distance between two classes (DBTC) is the fastest and the second-best-ranked method. We argue that DBTC is a reasonable alternative to cross-validation when training/hyperparameter-selection time is an issue, and that the loss in accuracy when using DBTC is reasonably small. © 2017 Published by Elsevier B.V.
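
For readers who want to experiment with the two extremes of the comparison, the sketch below tunes an RBF-kernel SVM with 5-fold cross-validation (via scikit-learn's GridSearchCV) and then scores the same gamma grid with a DBTC-style internal metric. The DBTC formula used here, the squared distance between the two class centers in the kernel-induced feature space, is one common formulation and an assumption on our part, not code taken from the paper; the synthetic data set, the parameter grids, and the helper dbtc are likewise illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for one of the binary data sets.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Baseline: tune C and gamma by 5-fold cross-validation.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
    cv=5,
)
grid.fit(X, y)
print("5-fold CV choice:", grid.best_params_)

# DBTC-style internal metric (assumed formulation): squared distance
# between the two class centers in the kernel feature space, computed
# from the training set alone via kernel averages.
def dbtc(X, y, gamma):
    K = rbf_kernel(X, gamma=gamma)
    pos, neg = y == 1, y == 0
    return (K[np.ix_(pos, pos)].mean()
            + K[np.ix_(neg, neg)].mean()
            - 2.0 * K[np.ix_(pos, neg)].mean())

# Pick the gamma that maximizes class separation; no refitting and no
# held-out folds are needed.
best_gamma = max([1e-3, 1e-2, 1e-1, 1], key=lambda g: dbtc(X, y, g))
print("DBTC choice: gamma =", best_gamma)
```

Note that the internal metric needs only one kernel matrix per candidate value, whereas cross-validation refits the classifier on every fold, which is consistent with the execution-time gap the abstract reports.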
