4.6 Review

Generating ensembles of heterogeneous classifiers using Stacked Generalization

Publisher

WILEY PERIODICALS, INC
DOI: 10.1002/widm.1143


Funding

  1. Spanish MICINN [TRA2010-20225-C03-01, TRA 2011-29454-C03-03]

Abstract

Over the last two decades, the machine learning and related communities have conducted numerous studies on improving the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative algorithms for generating homogeneous ensembles of classifiers, whereas Stacking has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization in 1992. Studies addressing Stacking have shown that the selection of the base learning algorithms that generate the ensemble members, their learning parameters, and the learning algorithm used to build the meta-classifier are all critical issues. Most studies on this topic select the appropriate combination of base learning algorithms and their learning parameters manually; other methods instead determine good Stacking configurations automatically, rather than starting from these strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains. WIREs Data Mining Knowl Discov 2015, 5:21-34. doi: 10.1002/widm.1143 For further resources related to this article, please visit the . Conflict of interest: The authors have declared no conflicts of interest for this article.
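The two-level scheme the abstract describes — heterogeneous base classifiers whose predictions become the training features of a meta-classifier — can be sketched as follows. This is a minimal, dependency-free illustration of Wolpert-style Stacking, not the paper's implementation: the learners here (`CentroidStump`, `Perceptron`) are hypothetical toy stand-ins, and a real heterogeneous ensemble would combine library classifiers such as decision trees, SVMs, and k-NN.

```python
# Minimal sketch of Stacked Generalization (Wolpert, 1992) for binary labels.
# CentroidStump and Perceptron are toy learners invented for this example.
import statistics

class CentroidStump:
    """Base (level-0) learner: looks at one feature and predicts the class
    whose per-class centroid on that feature is nearer."""
    def __init__(self, idx):
        self.idx = idx

    def fit(self, X, y):
        vals = {0: [], 1: []}
        for x, label in zip(X, y):
            vals[label].append(x[self.idx])
        self.c0 = statistics.mean(vals[0])
        self.c1 = statistics.mean(vals[1])
        return self

    def predict(self, X):
        return [1 if abs(x[self.idx] - self.c1) <= abs(x[self.idx] - self.c0)
                else 0 for x in X]

class Perceptron:
    """Meta (level-1) learner: learns how much to trust each base learner."""
    def fit(self, X, y, epochs=50, lr=0.1):
        self.w, self.b = [0.0] * len(X[0]), 0.0
        for _ in range(epochs):
            for x, t in zip(X, y):
                p = self._raw(x)
                if p != t:  # classic perceptron update on mistakes only
                    for i, xi in enumerate(x):
                        self.w[i] += lr * (t - p) * xi
                    self.b += lr * (t - p)
        return self

    def _raw(self, x):
        return 1 if sum(w * xi for w, xi in zip(self.w, x)) + self.b > 0 else 0

    def predict(self, X):
        return [self._raw(x) for x in X]

def stacking_fit_predict(X, y, X_test, base_factories, meta_factory, k=5):
    """Train a stacked ensemble of the given base learners, then label X_test."""
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]
    # Level 0: meta-features are *out-of-fold* base predictions, so the
    # meta-classifier never sees a base prediction made on its own training data.
    meta_X = [[0] * len(base_factories) for _ in range(n)]
    for fold in folds:
        held = set(fold)
        X_tr = [X[i] for i in range(n) if i not in held]
        y_tr = [y[i] for i in range(n) if i not in held]
        X_va = [X[i] for i in fold]
        for j, make in enumerate(base_factories):
            for i, p in zip(fold, make().fit(X_tr, y_tr).predict(X_va)):
                meta_X[i][j] = p
    # Level 1: fit the meta-classifier on the meta-features.
    meta = meta_factory().fit(meta_X, y)
    # Refit the base learners on all the data to featurize the test set.
    bases = [make().fit(X, y) for make in base_factories]
    return meta.predict([[b.predict([x])[0] for b in bases] for x in X_test])
```

The out-of-fold construction of `meta_X` is the step that distinguishes Stacking from naively feeding training-set predictions to a second model; on the toy data below, the meta-classifier learns to trust the stump on the informative feature 0 over the noisy feature 1.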

