Article

Learning Nonlinear Functions Using Regularized Greedy Forest

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2013.159

Keywords

Boosting; decision tree; decision forest; ensemble; greedy algorithm

Funding

  1. NSF [IIS-1016061, DMS-1007527, IIS-1250985]
  2. Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1250985, 1016061] — Funding Source: National Science Foundation
  3. Directorate for Mathematical & Physical Sciences, Division of Mathematical Sciences [1007527] — Funding Source: National Science Foundation

Abstract

We consider the problem of learning a forest of nonlinear decision rules with general loss functions. Standard methods employ boosted decision trees, such as AdaBoost for exponential loss and Friedman's gradient boosting for general loss. In contrast to these traditional boosting algorithms, which treat the tree learner as a black box, the proposed method learns decision forests directly via fully-corrective regularized greedy search over the underlying forest structure. On many of the datasets tested, the method achieves higher accuracy and smaller models than gradient boosting.
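The abstract's key contrast is between gradient boosting, which fits each new tree to the residual and then freezes it, and a fully-corrective greedy search, which refits all existing leaf weights jointly each time the forest grows. The toy sketch below illustrates that fully-corrective idea on depth-1 stumps with squared loss and ridge regularization; all function names and parameters here are illustrative assumptions, not the authors' RGF implementation (which uses structured regularization over the actual forest and supports general losses).

```python
import numpy as np

def fit_greedy_forest(X, y, n_stumps=4, lam=1e-3):
    """Toy fully-corrective greedy 'forest' of decision stumps (squared loss).

    After each stump is added, ALL leaf weights are refit jointly by ridge
    regression -- the fully-corrective step -- unlike gradient boosting,
    which freezes previously fitted trees. Illustrative sketch only.
    """
    thresholds = []                         # chosen split points
    candidates = (X[:-1] + X[1:]) / 2.0     # midpoints of sorted X

    def basis(ths):
        # Each stump at threshold t contributes two indicator columns.
        cols = [np.ones_like(X)]            # intercept
        for t in ths:
            cols.append((X <= t).astype(float))
            cols.append((X > t).astype(float))
        return np.column_stack(cols)

    def refit(ths):
        # Fully-corrective step: joint ridge refit of every leaf weight.
        B = basis(ths)
        w = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
        return w, np.sum((y - B @ w) ** 2) + lam * np.sum(w ** 2)

    for _ in range(n_stumps):
        # Greedy structure search: pick the split whose addition (with a
        # full refit of all weights) most reduces the regularized loss.
        best = min(candidates, key=lambda t: refit(thresholds + [t])[1])
        thresholds.append(best)

    w, _ = refit(thresholds)
    return thresholds, w, basis

# Tiny usage example on a smooth 1-D target.
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X)
ths, w, basis = fit_greedy_forest(X, y, n_stumps=4)
pred = basis(ths) @ w
print(round(float(np.mean((y - pred) ** 2)), 4))
```

The design point this sketch tries to make concrete: the search operates on the forest structure (which split to add) while the weights are always globally optimal for the current structure, whereas a gradient-boosting step would only fit the newest stump's weights to the residual.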

