Article

Multiple kernel extreme learning machine

Journal

NEUROCOMPUTING
Volume 149, Issue -, Pages 253-264

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2013.09.072

Keywords

Extreme learning machine; Multiple kernel learning; Support vector machines

Funding

  1. National Basic Research Program of China (973) [2014CB340303]
  2. National Natural Science Foundation of China [61403405, 60970034, 61170287, 61232016]

Abstract

Extreme learning machine (ELM) has been an important research topic over the last decade due to its high efficiency, ease of implementation, unification of classification and regression, and unification of binary and multi-class learning tasks. Though integrating these advantages, existing ELM algorithms pay little attention to optimizing the choice of kernels, which is indeed crucial to the performance of ELM in applications. More importantly, there is a lack of a general framework for ELM to integrate multiple heterogeneous data sources for classification. In this paper, we propose a general learning framework, termed multiple kernel extreme learning machine (MK-ELM), to address the above two issues. In the proposed MK-ELM, the optimal kernel combination weights and the structural parameters of ELM are jointly optimized. Following recent research on support vector machine (SVM) based MKL algorithms, we first design a sparse MK-ELM algorithm by imposing an l1-norm constraint on the kernel combination weights, and then extend it to a non-sparse scenario by substituting the l1-norm constraint with an lp-norm (p > 1) constraint. After that, a radius-incorporated MK-ELM algorithm, which incorporates the radius of the minimum enclosing ball (MEB), is introduced. Three efficient optimization algorithms are proposed to solve the corresponding kernel learning problems. Comprehensive experiments have been conducted on the Protein, Oxford Flower17, Caltech101 and Alzheimer's disease data sets to evaluate the performance of the proposed algorithms in terms of classification accuracy and computational efficiency. As the experimental results indicate, our proposed algorithms achieve comparable or even better classification performance than state-of-the-art MKL algorithms, while incurring much less computational cost. (C) 2014 Elsevier B.V. All rights reserved.
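To make the kernel-combination idea concrete, the following is a minimal Python sketch of a kernel ELM trained on a fixed convex combination of base kernels, using the standard closed-form kernel ELM solution alpha = (I/C + K)^(-1) T. The weight vector mu, the regularization constant C, and the two toy base kernels are illustrative assumptions; the paper's actual MK-ELM algorithms jointly optimize mu under the l1-norm, lp-norm, or radius-incorporated constraints rather than fixing it in advance.

```python
import numpy as np

def combined_kernel(kernels, mu):
    """Convex combination of base kernel matrices: K = sum_m mu_m * K_m."""
    return sum(w * K for w, K in zip(mu, kernels))

def kernel_elm_train(K, T, C=1.0):
    """Standard closed-form kernel ELM solution: alpha = (I/C + K)^(-1) T,
    where T holds the one-hot target rows."""
    n = K.shape[0]
    return np.linalg.solve(np.eye(n) / C + K, T)

def kernel_elm_predict(K_test, alpha):
    """Decision values for test points; K_test is the test-vs-train kernel matrix."""
    return K_test @ alpha

# Illustrative usage with two toy base kernels and fixed l1-normalized weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
K1 = X @ X.T                                                        # linear kernel
K2 = np.exp(-0.5 * np.square(X[:, None, :] - X[None, :, :]).sum(-1))  # RBF kernel
mu = np.array([0.5, 0.5])          # assumed weights satisfying the l1 constraint
T = np.eye(2)[rng.integers(0, 2, size=50)]   # one-hot labels for a toy 2-class task
K = combined_kernel([K1, K2], mu)
alpha = kernel_elm_train(K, T, C=10.0)
pred = np.argmax(kernel_elm_predict(K, alpha), axis=1)
```

In the sparse and non-sparse MK-ELM variants described above, this inner solve would be alternated with an update of mu under the chosen norm constraint; the radius-incorporated variant additionally accounts for the MEB radius of the combined feature space.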
