Article

Optimization method based extreme learning machine for classification

Journal

NEUROCOMPUTING
Volume 74, Issue 1-3, Pages 155-163

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2010.02.019

Keywords

Extreme learning machine; Support vector machine; Support vector network; ELM kernel; ELM feature space; Equivalence between ELM and SVM; Maximal margin; Minimal norm of weights; Primal and dual ELM networks

Funding

  1. Academic Research Fund (AcRF) Tier 1 [RG 22/08 (M52040128)]
  2. Chinese Scholarship Council (CSC) China

Abstract

Extreme learning machine (ELM), as an emergent technology, has shown good performance in regression applications as well as in large-dataset (and/or multi-label) classification applications. The ELM theory shows that the hidden nodes of generalized single-hidden-layer feedforward networks (SLFNs), which need not be neuron alike, can be randomly generated, and the universal approximation capability of such SLFNs can be guaranteed. This paper further studies ELM for classification from the standpoint of the standard optimization method and extends ELM to a specific type of generalized SLFN: the support vector network. This paper shows that: (1) under the ELM learning framework, SVM's maximal margin property and the minimal-norm-of-weights theory of feedforward neural networks are actually consistent; (2) from the standard optimization method point of view, ELM for classification and SVM are equivalent, but ELM has fewer optimization constraints due to its special separability feature; (3) as analyzed in theory and further verified by the simulation results, ELM for classification tends to achieve better generalization performance than traditional SVM. ELM for classification is less sensitive to user-specified parameters and can be implemented easily. (C) 2010 Elsevier B.V. All rights reserved.
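The abstract's core idea, that hidden-layer parameters are randomly generated and only the output weights are learned, can be sketched as follows. This is a minimal, hypothetical illustration (function names, data, and the choice of a sigmoid activation are the author of this sketch's assumptions, not the paper's exact formulation), using the regularized least-squares output-weight solution beta = (H^T H + I/C)^{-1} H^T T associated with optimization-based ELM:

```python
import numpy as np

def elm_train(X, T, n_hidden=50, C=1.0, seed=None):
    """Train a basic ELM. X: (n_samples, n_features); T: (n_samples, n_classes) one-hot targets."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never trained)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    # Output weights via regularized minimal-norm least squares
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy two-class data (synthetic, for illustration only)
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 2)) - 2,
               rng.standard_normal((50, 2)) + 2])
y = np.r_[np.zeros(50, dtype=int), np.ones(50, dtype=int)]
T = np.eye(2)[y]  # one-hot targets

W, b, beta = elm_train(X, T, n_hidden=30, C=10.0, seed=0)
acc = np.mean(elm_predict(X, W, b, beta) == y)
```

Unlike SVM training, there is no iterative optimization here: the only learned quantity is `beta`, obtained in closed form, which is why the paper argues ELM is easy to implement and less sensitive to user-specified parameters.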
