Article

Minimally overfitted learners: A general framework for ensemble learning

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 254

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109669

Keywords

Ensemble; Generative ensembles; Re-sampling; Bagging; Random forest

Funding

  1. Madrid Autonomous Community [IND2019/TIC-17194]
  2. Spanish Ministry of Economy and Competitiveness [RTI-2018-094269-B-I00]

This study introduces MOE, a new ensemble framework that combines stable and unstable machine learning algorithms to construct predictive models. Using resampling techniques and weighted random bootstrap sampling, the framework builds slightly overfitted base learners, thereby improving predictive ability.
Combining Machine Learning (ML) algorithms is a way to construct stronger predictors than any single one. However, some approaches suggest that combining unstable algorithms yields better results than combining stable algorithms. For instance, generative ensembles, based on re-sampling techniques, have demonstrated high performance by fusing the information of unstable base learners. Random Forest and Gradient Boosting are two well-known examples: both combine Decision Trees and provide better predictions than a single tree. However, such successful results have not been achieved with ensembles of stable algorithms. This paper introduces the notion of limited learner and a new general ensemble framework called Minimally Overfitted Ensemble (MOE), a re-sampling-based approach that constructs slightly overfitted base learners. The proposed framework works well with both stable and unstable base algorithms, thanks to a Weighted RAndom Bootstrap (WRAB) sampling that provides the diversity the stable base algorithms need. A hyperparameter analysis of the proposal is carried out on artificial data, and its performance is evaluated on real datasets against well-known ML methods. The results confirm that the MOE framework works successfully with stable and unstable base algorithms, improving in most cases the predictive ability of single ML models and other ensemble methods. (C) 2022 The Author(s). Published by Elsevier B.V.
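The abstract does not spell out the WRAB weighting scheme, so the following is only a minimal sketch of the idea it describes: give each base learner a different training set by drawing random per-instance sampling probabilities before each bootstrap, so that even stable algorithms end up diverse. The class name MOESketch, the Dirichlet-based weighting, the alpha knob, and the averaging combiner are all illustrative assumptions, not the authors' implementation; Ridge regression stands in for a "stable" base algorithm, and inputs are assumed to be NumPy arrays.

```python
# Minimal sketch of a WRAB-style ensemble in the spirit of MOE.
# NOT the paper's reference implementation: the Dirichlet weighting
# and the averaging combiner below are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge  # a "stable" base algorithm


class MOESketch:
    def __init__(self, base_estimator=None, n_estimators=25,
                 alpha=1.0, random_state=None):
        self.base_estimator = base_estimator or Ridge()
        self.n_estimators = n_estimators
        self.alpha = alpha            # Dirichlet concentration (assumed knob)
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        n = len(X)
        self.estimators_ = []
        for _ in range(self.n_estimators):
            # WRAB-style draw: random per-instance sampling probabilities,
            # then a bootstrap sample under those probabilities.
            p = rng.dirichlet(np.full(n, self.alpha))
            idx = rng.choice(n, size=n, replace=True, p=p)
            est = clone(self.base_estimator).fit(X[idx], y[idx])
            self.estimators_.append(est)
        return self

    def predict(self, X):
        # Fuse base learners by simple averaging (regression setting).
        return np.mean([e.predict(X) for e in self.estimators_], axis=0)
```

With a large alpha the Dirichlet draws concentrate near uniform probabilities and this reduces to ordinary bagging; a smaller alpha skews the weights more, which is one plausible way to inject the extra diversity that stable base learners need.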
