Journal
NEURAL PROCESSING LETTERS
Volume 37, Issue 3, Pages 377-392
Publisher
SPRINGER
DOI: 10.1007/s11063-012-9253-x
Keywords
Principal component analysis; Extreme learning machine; Classification
Funding
- Spanish Inter-Ministerial Commission of Science and Technology (MICYT) [TIN2011-22794]
- FEDER funds
- Junta de Andalucia (Spain) [P2011-TIC-7508]
- Doctoral Training on Softcomputing project
- Junta de Andalucia
- Ibero-American University Postgraduate Association (AUIP)
- Ministry of Higher Education of the Republic of Cuba
Abstract
It is well known that single-hidden-layer feedforward networks (SLFNs) with additive hidden nodes are universal approximators. However, training these models was slow until the advent of the extreme learning machine (ELM) (Huang et al., Neurocomputing 70(1-3):489-501, 2006) and its later improvements. Before ELM, the fastest algorithms for training SLFNs were gradient-based methods, which must be applied iteratively until a suitable model is obtained. This slow convergence meant that SLFNs were not used as widely as they could have been, despite their overall good performance. ELM made SLFNs a practical option for classifying large numbers of patterns in a short time. Until now, the hidden nodes have been randomly initialized and, in some approaches, subsequently tuned. This paper proposes a deterministic algorithm for initializing any hidden node with an additive activation function to be trained with ELM. Our algorithm uses information retrieved from principal component analysis to fit the hidden nodes. This approach considerably reduces computational cost compared with later ELM improvements and surpasses their performance.
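The core ELM idea the abstract describes can be sketched briefly: fix the hidden-layer input weights, then solve the output weights in closed form with a pseudo-inverse. The sketch below contrasts standard random initialization with a PCA-derived one; the PCA variant here (leading principal directions as input weights, zero biases) is an illustrative assumption, not the authors' exact algorithm.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None, pca_init=False):
    """Train a single-hidden-layer feedforward network with ELM.

    Hidden-node input weights are drawn at random (standard ELM) or,
    when pca_init=True, aligned with the principal components of X
    (a rough, assumed analogue of the PCA-based initialization).
    Output weights are solved in closed form via the pseudo-inverse.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    if pca_init:
        # Center the data and take the leading principal directions
        # as hidden-node input weights (illustrative assumption).
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:n_hidden].T                     # (n_features, <= n_hidden)
        if W.shape[1] < n_hidden:               # pad if fewer components exist
            extra = rng.standard_normal((n_features, n_hidden - W.shape[1]))
            W = np.hstack([W, extra])
        b = np.zeros(n_hidden)
    else:
        W = rng.standard_normal((n_features, n_hidden))
        b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the output weights are obtained by a single linear solve rather than iterative gradient descent, training is fast regardless of how the hidden nodes are initialized; the deterministic initialization only changes how `W` and `b` are chosen.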