Journal
JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE
Volume 24, Issue 2, Pages 219-230
Publisher
TAYLOR & FRANCIS LTD
DOI: 10.1080/0952813X.2011.639092
Keywords
naive Bayes; averaged one-dependence estimators; attribute weighting; classification; data mining
Funding
- National Natural Science Foundation of China [60905033, 61075063]
- Provincial Natural Science Foundation of Hubei [2011CDA103]
- Fundamental Research Funds for the Central Universities [CUG110405]
Naive Bayes (NB) is a probability-based classification model built on the attribute independence assumption. In many real-world data mining applications, however, this assumption is often violated, and researchers have made substantial efforts to improve the classification accuracy of NB by weakening it. As a recent example, averaged one-dependence estimators (AODE) weakens the assumption by averaging all models from a restricted class of one-dependence classifiers. However, all one-dependence classifiers in AODE receive the same weight and are treated equally. According to our observation, different one-dependence classifiers should receive different weights. In this article, we therefore propose an improved model called weighted average of one-dependence estimators (WAODE), which assigns a different weight to each one-dependence classifier. We design four weighting approaches, yielding four versions of WAODE, which we denote WAODE-MI, WAODE-ACC, WAODE-CLL and WAODE-AUC respectively. Experimental results on a large number of UCI datasets published on the main website of the Weka platform show that WAODE significantly outperforms AODE.
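The core idea of the abstract can be illustrated with a small sketch. The code below is a hypothetical toy implementation of the WAODE-MI variant, not the authors' code: each one-dependence estimator (SPODE) is rooted at one attribute, and instead of the uniform average used by AODE, each SPODE's vote is weighted by the mutual information between its root attribute and the class. The class names, smoothing scheme (simple Laplace correction) and helper functions are our own assumptions for illustration.

```python
# Hypothetical sketch of WAODE-MI (not the authors' implementation):
# a weighted, rather than uniform, average of one-dependence estimators,
# where the SPODE rooted at attribute A_i is weighted by I(A_i; C).
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in nats for two equal-length sequences of discrete values."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

class WAODE:
    """Weighted average of one-dependence estimators (MI-weighted variant)."""

    def fit(self, X, y):
        self.X, self.y = X, y
        self.n, self.d = len(X), len(X[0])
        self.classes = sorted(set(y))
        cols = list(zip(*X))
        self.vals = [len(set(col)) for col in cols]            # distinct values per attribute
        self.w = [mutual_information(col, y) for col in cols]  # one weight per SPODE
        return self

    def _count(self, cond):
        """Training rows matching every (index, value) pair; index -1 means the class."""
        return sum(all((self.y[r] if i == -1 else self.X[r][i]) == v for i, v in cond)
                   for r in range(self.n))

    def predict(self, x):
        scores = {}
        for c in self.classes:
            s = 0.0
            for i in range(self.d):               # SPODE rooted at attribute i
                joint = self._count([(-1, c), (i, x[i])])
                # Laplace-smoothed estimate of P(c, x_i)
                p = (joint + 1) / (self.n + len(self.classes) * self.vals[i])
                for j in range(self.d):           # P(x_j | c, x_i) for the other attributes
                    if j == i:
                        continue
                    num = self._count([(-1, c), (i, x[i]), (j, x[j])])
                    p *= (num + 1) / (joint + self.vals[j])
                s += self.w[i] * p                # weighted sum, unlike AODE's plain average
            scores[c] = s
        return max(scores, key=scores.get)
```

Replacing the mutual-information weights with held-out accuracy, conditional log-likelihood or AUC scores of each SPODE would give analogues of the other three variants named in the abstract.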