Article

Combination of Classifiers With Optimal Weight Based on Evidential Reasoning

Journal

IEEE TRANSACTIONS ON FUZZY SYSTEMS
Volume 26, Issue 3, Pages 1217-1230

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TFUZZ.2017.2718483

Keywords

Belief functions; classifier fusion; combination rule; Dempster-Shafer theory (DST); evidential reasoning (ER)

Funding

  1. National Natural Science Foundation of China [61672431, 61403310]
  2. Fundamental Research Funds for the Central Universities, China [3102017zy020]

Abstract

In pattern classification problems, different classifiers learned from different training data can provide more or less complementary knowledge, and combining classifiers is expected to improve classification accuracy. Evidential reasoning (ER) provides an efficient framework for representing and combining imprecise and uncertain information. In this paper, we focus on the weighted combination of classifiers based on ER. Because each classifier may perform differently on a given dataset, the classifiers to be combined are assigned different weights. A new weighted classifier combination method is proposed based on ER to enhance classification accuracy. The optimal classifier weights are obtained by minimizing the distance between the fusion results produced by Dempster's rule and the target outputs over the training data, so as to fully exploit the complementarity of the classifiers. A confusion matrix is additionally introduced to characterize the probability that an object belonging to one class is assigned to another class by the fusion result. This matrix is optimized on the training data jointly with the classifier weights, and it is used to modify the fusion result so that it is as close as possible to the truth. Moreover, the training patterns are given different weights in the parameter optimization for classifier fusion, and patterns that are hard to classify receive larger weights than those that are easy to handle. The pattern weights and the other parameters (i.e., the classifier weights and the confusion matrix) are optimized iteratively to obtain the highest classification accuracy. A cautious decision-making strategy is introduced to reduce errors: a pattern that is hard to classify is cautiously committed to a set of classes, because partial imprecision in the decision is considered better than an error in certain cases. The effectiveness of the proposed method is demonstrated on various real datasets from the UCI repository, and its performance is compared with that of other classical methods.
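For readers less familiar with the evidential machinery the abstract refers to, the sketch below illustrates the two basic operations it builds on: turning each classifier's soft output into a mass function discounted by its weight (Shafer discounting), and fusing the discounted evidence with Dempster's rule. This is a minimal Python illustration, not the authors' implementation; the class names, example outputs, and weight values are hypothetical, and the weight/confusion-matrix optimization, pattern weighting, and cautious decision step described above are not reproduced here.

```python
import numpy as np
from itertools import product

# Hypothetical 3-class frame of discernment; in the paper the frame is
# the set of classes of whichever UCI dataset is being fused.
CLASSES = ("c1", "c2", "c3")
THETA = frozenset(CLASSES)

def probs_to_bba(probs, weight):
    """Shafer discounting: scale the singleton masses by the classifier
    weight and move the remaining mass to the whole frame THETA."""
    bba = {frozenset([c]): weight * p for c, p in zip(CLASSES, probs)}
    bba[THETA] = 1.0 - weight * float(np.sum(probs))
    return bba

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of class labels."""
    fused, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting sources cannot be combined.")
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

# Soft outputs of two classifiers for one pattern, and assumed weights
# (the paper learns the weights by minimizing the distance between the
# fused output and the target labels on the training data).
outputs = [np.array([0.7, 0.2, 0.1]), np.array([0.3, 0.5, 0.2])]
weights = [0.9, 0.6]

bbas = [probs_to_bba(p, w) for p, w in zip(outputs, weights)]
fused = bbas[0]
for m in bbas[1:]:
    fused = dempster_combine(fused, m)

for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(sorted(focal), round(mass, 4))
```

With both weights equal to 1 this reduces to the classical Dempster combination of the raw outputs; lowering a classifier's weight shifts its mass toward the whole frame, so that classifier influences the fused result less.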

