JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY
Volume 20, Issue 1, Pages 48-54
Publisher
SCIENCE PRESS
DOI: 10.1007/s11390-005-0005-5
Keywords
bagging; data mining; ensemble learning; machine learning; Minkowsky distance; nearest neighbor; value difference metric
Abstract
It is well known that to build a strong ensemble, the component learners should have both high diversity and high accuracy. If perturbing the training set causes significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, so Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers by injecting randomness into the distance metrics. In constructing the component learners, both the training set and the distance metric used to identify the neighbors are perturbed. A large-scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
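The core idea can be sketched in a few lines. The following is a minimal illustration, not the authors' exact BagInRand algorithm: each component learner is a 1-nearest-neighbor classifier trained on a bootstrap sample, and extra diversity is injected by drawing a random Minkowsky order p for each component's distance metric; the ensemble predicts by majority vote. All function names, the candidate p values, and the number of components are illustrative assumptions.

```python
# Sketch of Bagging with randomized Minkowsky distance metrics
# (illustrative only; parameter choices are assumptions, not from the paper).
import random
from collections import Counter

def minkowski(a, b, p):
    # Minkowsky distance of order p between two feature vectors
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def nn_predict(train, x, p):
    # 1-nearest-neighbor prediction under the Minkowsky-p metric
    _, label = min(((minkowski(xi, x, p), yi) for xi, yi in train),
                   key=lambda t: t[0])
    return label

def bagged_nn_predict(X, y, x, n_components=11, p_choices=(1, 2, 3), seed=0):
    rng = random.Random(seed)
    data = list(zip(X, y))
    votes = []
    for _ in range(n_components):
        boot = [rng.choice(data) for _ in data]  # perturb the training set
        p = rng.choice(p_choices)                # perturb the distance metric
        votes.append(nn_predict(boot, x, p))
    return Counter(votes).most_common(1)[0][0]   # majority vote
```

Without the random metric, every bootstrap sample of a stable nearest-neighbor learner tends to yield nearly identical decision boundaries; randomizing p changes which neighbor is "nearest", which is what makes the components diverse enough for voting to help.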