Article

Rotation Forest for Big Data

Journal

INFORMATION FUSION
Volume 74, Issue -, Pages 39-49

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.03.007

Keywords

Rotation Forest; Random Forest; Ensemble learning; Machine learning; Big Data; Spark

Funding

  1. Ministerio de Economia y Competitividad of the Spanish Government [TIN2015-67534-P]
  2. Junta de Castilla y Leon, Spain [BU085P17, BU055P20]
  3. European Union FEDER funds
  4. Consejeria de Educacion of the Junta de Castilla y Leon, Spain
  5. European Social Fund [EDU/1100/2017]
  6. "la Caixa" Foundation, Spain [LCF/PR/PR18/51130007]
  7. Google Cloud

Abstract

The paper introduces a MapReduce Rotation Forest and its implementation under the Spark framework, addressing the issue of long training and prediction times in the original Rotation Forest in the context of Big Data. By parallelizing both PCA calculation and tree training, the proposed solution retains the performance of the original Rotation Forest while achieving competitive execution time.
The Rotation Forest classifier is a successful ensemble method for a wide variety of data mining applications. However, the way in which Rotation Forest transforms the feature space through PCA, although powerful, penalizes training and prediction times, making it unfeasible for Big Data. In this paper, a MapReduce Rotation Forest and its implementation under the Spark framework are presented. The proposed MapReduce Rotation Forest behaves in the same way as the standard Rotation Forest, training the base classifiers on a rotated space, but using a functional implementation of the rotation that enables its execution in Big Data frameworks. Experimental results are obtained using different cloud-based cluster configurations. Bayesian tests are used to validate the method against two ensembles for Big Data: Random Forest and PCARDE classifiers. Our proposal incorporates the parallelization of both the PCA calculation and the tree training, providing a scalable solution that retains the performance of the original Rotation Forest and achieves a competitive execution time (on average, more than 3 times faster at training than other PCA-based alternatives). In addition, extensive experimentation shows that by setting some parameters of the classifier (i.e., bootstrap sample size, number of trees, and number of rotations), the execution time is reduced with no significant loss of performance using a small ensemble.
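
The abstract describes parallelizing both the PCA rotation and the tree training on Spark. As an informal illustration only, and not the authors' implementation, the following minimal PySpark sketch trains a Rotation-Forest-style ensemble: for each rotation it fits a PCA on a bootstrap-style sample, rotates the full training set, and trains one decision tree on the rotated features. The function name, column names, and parameters are hypothetical, and the sketch simplifies the algorithm by applying a single global PCA per tree instead of the per-feature-subset rotations of the full Rotation Forest.

import random
from pyspark.ml.feature import PCA
from pyspark.ml.classification import DecisionTreeClassifier

def train_rotation_ensemble(df, n_rotations=10, bootstrap_fraction=0.75, seed=42):
    """Return a list of (pca_model, tree_model) pairs, one per rotation.

    `df` is assumed to be a Spark DataFrame with a Vector column "features"
    and a numeric column "label" (illustrative column names).
    """
    rng = random.Random(seed)
    n_features = len(df.first()["features"])   # keep every principal component
    ensemble = []
    for _ in range(n_rotations):
        # Fit the rotation on a bootstrap-style sample so that each tree
        # sees a slightly different rotation of the feature space.
        sample = df.sample(withReplacement=True, fraction=bootstrap_fraction,
                           seed=rng.randint(0, 2**31 - 1))
        pca_model = PCA(k=n_features, inputCol="features",
                        outputCol="rotated").fit(sample)
        # Rotate the full training set and train a single decision tree on it.
        rotated = pca_model.transform(df)
        tree = DecisionTreeClassifier(featuresCol="rotated", labelCol="label",
                                      seed=rng.randint(0, 2**31 - 1)).fit(rotated)
        ensemble.append((pca_model, tree))
    return ensemble

At prediction time, each stored PCA model is applied to the test set and the corresponding tree's predictions are combined by majority vote. In the paper's MapReduce formulation, both the rotation fitting and the tree training are distributed across the cluster, which is what yields the reported speedup over other PCA-based alternatives.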

