Article

Model selection for support vector machines via uniform design

Journal

COMPUTATIONAL STATISTICS & DATA ANALYSIS
Volume 52, Issue 1, Pages 335-346

Publisher

ELSEVIER
DOI: 10.1016/j.csda.2007.02.013

Keywords

discrepancy measure; Gaussian kernel; k-fold cross-validation; model selection; number-theoretic methods; quasi-Monte Carlo; support vector machine; uniform design

Abstract

Choosing a parameter setting that yields good generalization performance in a learning task is known as model selection. A nested uniform design (UD) methodology is proposed for efficient, robust and automatic model selection for support vector machines (SVMs). The proposed method selects a candidate set of parameter combinations and carries out k-fold cross-validation to evaluate the generalization performance of each combination. In contrast to conventional exhaustive grid search, this method can be treated as a deterministic analogue of random search. It dramatically cuts down the number of parameter trials and also provides the flexibility to adjust the candidate set size under a computational time constraint. The key theoretical advantage of UD model selection over grid search is that UD points are far more uniform and far more space filling than lattice grid points. This better uniformity and space-filling behavior make the UD selection scheme more efficient by avoiding wasteful function evaluations at closely spaced points. The proposed method is evaluated on different learning tasks, different data sets, as well as different SVM algorithms. (c) 2007 Elsevier B.V. All rights reserved.
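
The search described in the abstract amounts to: scatter a small space-filling set of points over the (C, gamma) search box, score each point by k-fold cross-validation, then repeat on a smaller box centred at the best point found so far. The following is a minimal sketch of that idea, not the authors' implementation: a scrambled Halton sequence from scipy.stats.qmc stands in for the uniform-design tables used in the paper, and the data set, search box, point counts and shrinkage factor are arbitrary illustrative choices.

import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data set standing in for the benchmarks used in the paper.
X, y = load_breast_cancer(return_X_y=True)

def cv_accuracy(log2_C, log2_gamma, k=5):
    # k-fold cross-validation accuracy of one (C, gamma) trial.
    model = make_pipeline(
        StandardScaler(),
        SVC(kernel="rbf", C=2.0 ** log2_C, gamma=2.0 ** log2_gamma),
    )
    return cross_val_score(model, X, y, cv=k).mean()

def space_filling_search(center, half_width, n_points, seed):
    # Scatter a low-discrepancy point set over the box
    # [center - half_width, center + half_width] in (log2 C, log2 gamma)
    # space and return the best-scoring point.
    sampler = qmc.Halton(d=2, scramble=True, seed=seed)
    unit_points = sampler.random(n_points)
    lo = np.asarray(center) - np.asarray(half_width)
    hi = np.asarray(center) + np.asarray(half_width)
    trials = qmc.scale(unit_points, lo, hi)
    scores = [cv_accuracy(c, g) for c, g in trials]
    best = int(np.argmax(scores))
    return trials[best], scores[best]

# Stage 1: coarse search over a wide box of (log2 C, log2 gamma) values.
best_point, best_score = space_filling_search(
    center=(5.0, -5.0), half_width=(8.0, 8.0), n_points=13, seed=0)

# Stage 2 (the nested step): shrink the box around the stage-1 winner.
best_point, best_score = space_filling_search(
    center=best_point, half_width=(2.0, 2.0), n_points=9, seed=1)

print("best (log2 C, log2 gamma):", best_point,
      "cross-validation accuracy: %.4f" % best_score)

The two-stage refinement mirrors the nested scheme summarized above: a coarse pass locates a promising region with only a handful of cross-validation runs, and a second, smaller design around the winner sharpens the estimate without the cost of an exhaustive grid.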
