Article

Randomizing outputs to increase prediction accuracy

Journal

MACHINE LEARNING
Volume 40, Issue 3, Pages 229-242

Publisher

SPRINGER
DOI: 10.1023/A:1007682208299

Keywords

ensemble; randomization; output variability


Bagging and boosting reduce error by perturbing both the inputs and outputs of the training set, growing predictors on the perturbed sets, and combining them. An interesting question is whether comparable performance can be obtained by perturbing the outputs alone. Two methods of randomizing outputs are studied: output smearing and output flipping. Both are shown to consistently do better than bagging.
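
The abstract describes the two output-randomization schemes only at a high level. The sketch below shows one plausible reading of them, assuming scikit-learn decision trees as the base predictors; the flip rate and smear scale are illustrative choices, not the paper's settings, and the voting helper assumes integer class labels.

```python
# Hedged sketch: output flipping (classification) and output smearing (regression)
# ensembles, combined by plurality vote / averaging. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor


def output_flipping_ensemble(X, y, n_trees=50, flip_rate=0.2, seed=None):
    """Grow trees on copies of (X, y) whose class labels are randomly flipped."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    classes = np.unique(y)
    trees = []
    for _ in range(n_trees):
        y_perturbed = y.copy()
        # Each label is flipped independently with probability flip_rate.
        for i in np.flatnonzero(rng.random(len(y)) < flip_rate):
            others = classes[classes != y_perturbed[i]]
            y_perturbed[i] = rng.choice(others)  # move to a different class
        trees.append(DecisionTreeClassifier().fit(X, y_perturbed))
    return trees


def output_smearing_ensemble(X, y, n_trees=50, smear_scale=1.0, seed=None):
    """Grow regression trees on copies of (X, y) with Gaussian noise added to y."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    sigma = smear_scale * y.std()  # noise scale tied to the output spread (assumption)
    trees = []
    for _ in range(n_trees):
        y_perturbed = y + rng.normal(0.0, sigma, size=len(y))
        trees.append(DecisionTreeRegressor().fit(X, y_perturbed))
    return trees


def vote(trees, X):
    """Combine classifiers by unweighted plurality vote (integer labels assumed)."""
    preds = np.array([t.predict(X) for t in trees])
    return np.array([np.bincount(col.astype(int)).argmax() for col in preds.T])


def average(trees, X):
    """Combine regressors by averaging their predictions."""
    return np.mean([t.predict(X) for t in trees], axis=0)
```

Unlike bagging, every tree here sees the full, unresampled input set; only the outputs differ across the perturbed training sets, which is exactly the question the paper poses.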

Authors

Leo Breiman
