Article

Is combining classifiers with stacking better than selecting the best one?

Journal

MACHINE LEARNING
Volume 54, Issue 3, Pages 255-273

Publisher

SPRINGER
DOI: 10.1023/b:mach.0000015881.36452.6e

Keywords

multi-response model trees; stacking; combining classifiers; ensembles of classifiers; meta-learning


We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.
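The comparison the abstract describes can be illustrated with a minimal sketch: build a heterogeneous ensemble, stack it using class-probability meta-features fed to a linear meta-learner, and compare against simply selecting the best base classifier by cross-validation. This is not the paper's experimental setup; the dataset, the choice of base learners, and the use of scikit-learn's `StackingClassifier` with logistic regression (standing in for multi-response linear regression at the meta-level) are all illustrative assumptions.

```python
# Hedged sketch: stacking with probability-distribution meta-features vs.
# selecting the best base classifier by cross-validation. All model and
# dataset choices are illustrative, not taken from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB())]

# Baseline: select the base learner with the highest cross-validated
# accuracy on the training set, then evaluate it on the held-out set.
cv_scores = {name: cross_val_score(est, X_tr, y_tr, cv=5).mean()
             for name, est in base}
best_name = max(cv_scores, key=cv_scores.get)
best = dict(base)[best_name].fit(X_tr, y_tr)
acc_best = best.score(X_te, y_te)

# Stacking: out-of-fold predicted class probabilities of the base learners
# become meta-level features for a linear meta-classifier.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)
acc_stack = stack.fit(X_tr, y_tr).score(X_te, y_te)
print(f"best-by-CV ({best_name}): {acc_best:.3f}  stacking: {acc_stack:.3f}")
```

On any given split either approach may come out ahead; the paper's point is that plain stacking is at best comparable to the select-best baseline, while its proposed meta-level model-tree extension beats both.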

Authors

Sašo Džeroski, Bernard Ženko
