Article

HardGBM: A Framework for Accurate and Hardware-Efficient Gradient Boosting Machines

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCAD.2022.3218509

Keywords

Ensemble methods; gradient boosting machine (GBM); hardware implementation; hardware-software (HW/SW) codesign; machine learning (ML); resource- and power-constrained


The paper proposes, for the first time, a GBM reduction framework that supports automatic hardware implementation of regression tree ensembles. Experimental results demonstrate that the method reduces area utilization and power consumption while maintaining accuracy compared with the original ensembles.
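To make the idea of hardware-mapped regression trees concrete, here is a minimal sketch, not HardGBM's actual toolflow, of how a single trained tree is typically frozen into fixed-point comparator-and-mux logic; the Q2.6 format, thresholds, and leaf values below are all illustrative assumptions.

# Minimal sketch (NOT the paper's toolflow) of mapping a regression tree
# to hardware: quantize thresholds and leaf values to fixed point, then
# evaluate the tree as a pure comparator/mux network. The 8-bit Q2.6
# format and all constants are hypothetical.
FRAC_BITS = 6  # assumed Q2.6 fixed-point format

def to_fixed(v: float) -> int:
    return round(v * (1 << FRAC_BITS))

def tree_eval(x: list[int]) -> int:
    # Internal nodes compare one feature against a hard-wired threshold;
    # leaves return hard-wired constants. This is exactly the structure a
    # generated RTL datapath would implement with comparators and muxes.
    if x[0] < to_fixed(0.25):
        return to_fixed(-0.8) if x[1] < to_fixed(1.5) else to_fixed(-0.1)
    return to_fixed(0.2) if x[1] < to_fixed(-0.5) else to_fixed(0.9)

print(tree_eval([to_fixed(0.1), to_fixed(2.0)]))  # -> -6, i.e. -0.1 in Q2.6

Because every threshold and leaf constant is hard-wired, per-tree area grows with node count, which is why shrinking the ensemble translates directly into area and power savings.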
Gradient boosting machine (GBM) is a powerful and widely used class of ensemble machine learning methods, the best known of which is XGBoost (XGB). However, the cost of running large GBMs on hardware can become prohibitive under stringent resource constraints. Reducing a boosting ensemble is intrinsically hard because member models are constructed sequentially: the training targets of later models depend on the outputs of earlier ones. In this work, a GBM reduction framework is proposed, for the first time, to tackle this problem; the framework also supports, for the first time, automatic hardware implementation of regression tree ensembles. Experiments on 24 datasets from various applications demonstrate that our method reduces overall area utilization by 81.60% (80.64%) and power consumption by 21.15% (19.06%) while matching or exceeding the performance of the original XGB (LightGBM) ensembles. In comparative experiments, to attain approximately the same accuracy as our framework or XGB, deep-learning-based solutions require more than 52.7x the footprint, 6.0x the power consumption, and 1.4x the training time. Equipped with tunable parameters, the framework can explore a Pareto-optimal front that balances hardware resource limits, accuracy and stability, and computation (training) efficiency.
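The sequential-dependence obstacle described above is easy to see in code. The following sketch (plain gradient boosting built from scikit-learn trees, not the paper's reduction method; the dataset and hyperparameters are illustrative) shows that each tree is fit to the residuals left by all earlier trees, so naively deleting a member invalidates the implicit training targets of every tree after it.

# Plain gradient boosting for regression: each tree fits the residuals
# of the ensemble built so far. Illustrative sketch, not HardGBM.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

learning_rate = 0.3
trees = []
pred = np.zeros_like(y)                      # F_0(x) = 0
for _ in range(50):
    residual = y - pred                      # target depends on all earlier trees
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(tree)
    pred += learning_rate * tree.predict(X)

# Dropping one early member shifts the contribution every later tree was
# trained to correct, so the naively reduced ensemble typically degrades:
full = sum(learning_rate * t.predict(X) for t in trees)
reduced = sum(learning_rate * t.predict(X) for i, t in enumerate(trees) if i != 5)
print("train MSE, full ensemble :", np.mean((y - full) ** 2))
print("train MSE, tree 5 removed:", np.mean((y - reduced) ** 2))

A reduction framework therefore cannot simply drop trees; it has to account for these inter-tree dependencies, which is the core difficulty the paper addresses.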
