Article

A novel measure for evaluating classifiers

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 37, Issue 5, Pages 3799-3809

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2009.11.040

Keywords

Performance evaluation; Entropy; Accuracy; Classification

Funding

  1. Science Foundation of Jilin Province [20040529]
  2. National 863 High Technology Research and Development Program of China [2009AA01Z152]
  3. National Natural Science Foundation of China [60703013, 10978011]

Abstract

Evaluating classifier performance is a crucial problem in pattern recognition and machine learning. In this paper, we propose a new measure, confusion entropy, for evaluating classifiers. For each class cl(i) of an (N + 1)-class problem, the misclassification information comprises both how the samples with true class label cl(i) have been misclassified into the other N classes and how the samples of the other N classes have been misclassified as class cl(i). The proposed measure exploits the class distribution information of such misclassifications over all classes. Both theoretical analysis and statistical experiments show that the proposed measure is more precise than accuracy and RCI (relative classifier information). Experimental results on several benchmark data sets further confirm the theoretical analysis and statistical results, and show that the new measure is feasible for evaluating classifier performance. (C) 2009 Elsevier Ltd. All rights reserved.
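
The abstract does not reproduce the formal definition, but a confusion-entropy-style score can be computed directly from a confusion matrix. The Python sketch below (the function name confusion_entropy is ours, not the paper's) follows the commonly cited CEN formulation: misclassification probabilities for class j are normalized by the combined mass of row j and column j, entropies are taken in log base 2N for an (N + 1)-class problem, and per-class entropies are weighted by that mass. Treat these specifics as assumptions and consult the paper for the authoritative definitions.

    import numpy as np

    def confusion_entropy(cm):
        """Confusion-entropy-style score of a square confusion matrix (rows = true classes).

        Sketch only: per-class misclassification probabilities are normalized by the
        combined mass of row j and column j, entropies use log base 2N for an
        (N + 1)-class problem, and per-class entropies are weighted by that mass.
        """
        cm = np.asarray(cm, dtype=float)
        n_classes = cm.shape[0]
        if n_classes < 2:
            raise ValueError("need at least two classes")
        total = cm.sum()
        # mass of class j: everything truly j or predicted as j (diagonal counted twice)
        mass = cm.sum(axis=1) + cm.sum(axis=0)
        log_base = 2.0 * (n_classes - 1)          # 2N for an (N + 1)-class problem

        def h(p):
            # entropy term, with the convention 0 * log 0 = 0
            return 0.0 if p <= 0.0 else -p * np.log(p) / np.log(log_base)

        cen = 0.0
        for j in range(n_classes):
            if mass[j] == 0:
                continue
            cen_j = 0.0
            for k in range(n_classes):
                if k == j:
                    continue
                # class-j samples misclassified as k, and class-k samples misclassified as j
                cen_j += h(cm[j, k] / mass[j]) + h(cm[k, j] / mass[j])
            cen += (mass[j] / (2.0 * total)) * cen_j   # class weights P_j sum to 1
        return cen

    # Example: mostly diagonal matrix -> small score; an error-free matrix scores 0.0
    print(confusion_entropy([[48, 1, 1],
                             [2, 46, 2],
                             [1, 3, 46]]))

Under this formulation a perfectly diagonal confusion matrix scores 0, and the score grows as errors spread over more class pairs, which is consistent with the abstract's claim that the measure discriminates between classifiers more finely than plain accuracy.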
