Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 36, Issue 5, Pages 942-954
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2013.159
Keywords
Boosting; decision tree; decision forest; ensemble; greedy algorithm
Funding
- NSF [IIS-1016061, DMS-1007527, IIS-1250985]
- NSF Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [IIS-1016061, IIS-1250985]
- NSF Directorate for Mathematical & Physical Sciences, Division of Mathematical Sciences [DMS-1007527]
Abstract
We consider the problem of learning a forest of nonlinear decision rules with general loss functions. The standard methods employ boosted decision trees, such as AdaBoost for exponential loss and Friedman's gradient boosting for general loss. In contrast to these traditional boosting algorithms, which treat a tree learner as a black box, the method we propose directly learns decision forests via fully-corrective regularized greedy search using the underlying forest structure. Our method achieves higher accuracy and smaller models than gradient boosting on many of the datasets we have tested.
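The key distinction the abstract draws is between stage-wise boosting, which fixes the weights of previously added trees, and fully-corrective updates, which re-optimize all weights at every iteration. The sketch below is not the authors' regularized greedy forest algorithm; it is a minimal 1-D illustration of the fully-corrective idea using decision stumps as weak learners, with squared loss and hypothetical function names.

```python
import numpy as np

def fit_stump(x, residual):
    """Greedily fit a regression stump (one threshold split) to the residual."""
    best = None
    for t in np.unique(x)[:-1]:
        left = residual[x <= t].mean()
        right = residual[x > t].mean()
        err = ((residual - np.where(x <= t, left, right)) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda z: np.where(z <= t, left, right)

def fully_corrective_fit(x, y, n_rounds=5):
    """Add one stump per round, then jointly re-fit ALL stump weights.

    Stage-wise gradient boosting would instead freeze earlier weights;
    the least-squares re-fit over every stump is the fully-corrective step.
    """
    stumps, pred = [], np.zeros_like(y, dtype=float)
    for _ in range(n_rounds):
        stumps.append(fit_stump(x, y - pred))  # greedy step on current residual
        H = np.column_stack([s(x) for s in stumps])
        w, *_ = np.linalg.lstsq(H, y, rcond=None)  # fully-corrective re-weighting
        pred = H @ w
    return stumps, w
```

On a simple step function, the first stump finds the true threshold and the joint re-fit drives the training error to zero; the point of the example is only the structure of the update, not the regularization or forest-structured search the paper actually develops.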