Article

Efficient Leave-One-Out Cross-Validation-based Regularized Extreme Learning Machine

Journal

NEUROCOMPUTING
Volume 194, Pages 260-270

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2016.02.058

Keywords

Extreme Learning Machine (ELM); Regularized ELM (RELM); Ridge regression; Leave-One-Out Cross-Validation (LOO-CV)

Abstract

It is well known that Leave-One-Out Cross-Validation (LOO-CV) is a highly reliable procedure for model selection. Unfortunately, it is extremely time-consuming and has rarely been deployed in practical applications. In this paper, a highly efficient LOO-CV formula is developed and integrated with the popular Regularized Extreme Learning Machine (RELM). The main contribution of this paper is the proposed algorithm, termed the Efficient LOO-CV-based RELM (ELOO-RELM), which can effectively and efficiently update the LOO-CV error for every candidate regularization parameter and automatically select the optimal model with limited user intervention. A rigorous analysis of computational complexity shows that ELOO-RELM, including the tuning process, achieves efficiency similar to that of the original RELM with a pre-defined parameter, with both scaling linearly in the size of the training data. An early termination criterion is also introduced to further speed up the learning process. Experimental studies on benchmark datasets show that ELOO-RELM achieves generalization performance comparable to that of Support Vector Machines (SVM) with significantly higher learning efficiency. More importantly, compared to the trial-and-error tuning procedure employed by the original RELM, ELOO-RELM provides more reliable results by virtue of incorporating the LOO-CV procedure. (C) 2016 Published by Elsevier B.V.
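As an illustrative sketch of the kind of computation the abstract describes (not the authors' exact ELOO-RELM algorithm, which is given in the paper), the snippet below applies the standard PRESS / hat-matrix identity for ridge regression to an ELM hidden layer: a single SVD of the hidden-layer output matrix is reused to obtain closed-form LOO-CV errors for an entire grid of regularization values, so the per-lambda update is cheap. The activation function, lambda grid, and toy data are placeholder assumptions.

# Hedged sketch: efficient LOO-CV for a ridge-regularized ELM via the
# standard PRESS identity e_loo_i = (y_i - yhat_i) / (1 - H_ii).
# This illustrates the idea in the abstract; it is NOT the paper's code.
import numpy as np

def elm_hidden(X, W, b):
    # Random-feature hidden layer with a sigmoid activation (assumed choice).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def loo_mse_per_lambda(H, y, lambdas):
    # One thin SVD of the hidden-layer matrix H is reused for every lambda,
    # so sweeping the whole grid stays linear in the number of samples.
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    Uy = U.T @ y
    errors = []
    for lam in lambdas:
        d = s**2 / (s**2 + lam)                   # ridge shrinkage factors
        y_hat = U @ (d * Uy)                      # fitted values for this lambda
        h_ii = np.einsum('ij,j,ij->i', U, d, U)   # diagonal of the hat matrix
        e_loo = (y - y_hat) / (1.0 - h_ii)        # closed-form LOO residuals
        errors.append(np.mean(e_loo**2))
    return np.array(errors)

# Toy usage (all values here are illustrative placeholders):
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W = rng.standard_normal((5, 50))
b = rng.standard_normal(50)
H = elm_hidden(X, W, b)
lambdas = np.logspace(-6, 2, 25)
loo = loo_mse_per_lambda(H, y, lambdas)
best_lambda = lambdas[np.argmin(loo)]             # model selected by LOO-CV

Picking the lambda with the lowest LOO-CV error mirrors, in spirit, the automatic model selection the abstract attributes to ELOO-RELM; the paper's early-termination criterion would presumably stop such a sweep once the LOO error stops improving.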
