Article

Accelerating the XGBoost algorithm using GPU computing

Journal

PeerJ Computer Science

Publisher

PeerJ Inc.
DOI: 10.7717/peerj-cs.127

Keywords

Supervised machine learning; Gradient boosting; GPU computing

Funding

  1. Marsden Grant from the Royal Society of New Zealand [UOW1502]

Abstract

We present a CUDA-based implementation of a decision tree construction algorithm within the gradient boosting library XGBoost. The tree construction algorithm is executed entirely on the graphics processing unit (GPU) and shows high performance with a variety of datasets and settings, including sparse input matrices. Individual boosting iterations are parallelised, combining two approaches: an interleaved approach is used for shallow trees, switching to a more conventional radix-sort-based approach for larger depths. We show speedups of between 3x and 6x using a Titan X compared to a 4-core i7 CPU, and 1.2x using a Titan X compared to 2x Xeon CPUs (24 cores). We show that it is possible to process the Higgs dataset (10 million instances, 28 features) entirely within GPU memory. The algorithm is made available as a plug-in within the XGBoost library and fully supports all XGBoost features, including classification, regression and ranking tasks.
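
For illustration, below is a minimal sketch of GPU-accelerated training through XGBoost's Python API, on synthetic data standing in for a real dataset. The GPU-selection parameters are an assumption that depends on the XGBoost version: the plug-in described in the paper was originally exposed through the updater setting, while recent releases select GPU execution with tree_method="hist" together with device="cuda", as shown here; a CUDA-enabled build of the xgboost package is required.

    # Minimal sketch: GPU-accelerated XGBoost training (parameter names
    # assume a recent xgboost release; older versions used a different
    # updater/tree_method setting to enable the GPU plug-in).
    import numpy as np
    import xgboost as xgb

    # Synthetic binary-classification data standing in for a real dataset
    # (e.g. Higgs: 10 million instances, 28 features).
    rng = np.random.default_rng(seed=0)
    X = rng.standard_normal((100_000, 28)).astype(np.float32)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.0).astype(np.int32)

    dtrain = xgb.DMatrix(X, label=y)

    params = {
        "objective": "binary:logistic",
        "max_depth": 6,
        "eta": 0.1,
        "tree_method": "hist",  # histogram-based tree construction
        "device": "cuda",       # run tree construction on the GPU
    }

    # Each boosting round builds one tree; with device="cuda" the tree
    # construction runs on the GPU, so the training matrix must fit in
    # GPU memory.
    booster = xgb.train(params, dtrain, num_boost_round=100)
    preds = booster.predict(dtrain)

As the abstract notes, tree construction is executed entirely on the GPU, so the practical constraint is that the (possibly sparse) training matrix fits in GPU memory.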
