Article

A Strategy on Selecting Performance Metrics for Classifier Evaluation

Publisher

IGI GLOBAL
DOI: 10.4018/IJMCMC.2014100102

Keywords

Classifiers; Classifiers' Performances; Correlation; Machine Learning Community; Performance Metrics

Funding

  1. Zhejiang Provincial Natural Science Foundation of China [LY15F020035, LY16F030012, LY15F030016]
  2. Ningbo Natural Science Foundation of China [2014A610066, 2011A610177, 2012A610018]
  3. Scientific Research Fund of Zhejiang Provincial Education Department [Y201534788]
  4. Jiangsu Province Natural Science Foundation of China [BK20150201]

Abstract

The evaluation of classifiers' performances plays a critical role in the construction and selection of classification models. Although many performance metrics have been proposed in the machine learning community, no general guidelines are available to practitioners regarding which metric should be selected for evaluating a classifier's performance. In this paper, the authors attempt to provide practitioners with a strategy for selecting performance metrics for classifier evaluation. Firstly, the authors investigate seven widely used performance metrics, namely classification accuracy, F-measure, kappa statistic, root mean square error, mean absolute error, the area under the receiver operating characteristic curve, and the area under the precision-recall curve. Secondly, the authors use Pearson linear correlation and Spearman rank correlation to analyze the potential relationships among these seven metrics. Experimental results show that these commonly used metrics can be divided into three groups, and all metrics within a given group are highly correlated but less correlated with metrics from other groups.
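The correlation analysis the abstract describes can be sketched as follows: treat each metric as a vector of scores over a set of classifiers, then compute Pearson and Spearman correlations between metric pairs. This is a minimal illustration, not the authors' implementation; the metric values below are hypothetical, and the Spearman coefficient is computed as Pearson correlation over tie-averaged ranks.

```python
def pearson(x, y):
    """Pearson linear correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Rank each value (1-based), averaging ranks over ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Hypothetical per-classifier scores for three of the seven metrics.
acc = [0.91, 0.85, 0.78, 0.88, 0.95]  # classification accuracy
f1  = [0.90, 0.83, 0.75, 0.86, 0.94]  # F-measure
mae = [0.09, 0.16, 0.23, 0.12, 0.05]  # mean absolute error

# Accuracy and F-measure move together (same group); MAE is an error
# metric, so it correlates negatively with both.
print(pearson(acc, f1), spearman(acc, f1))
print(pearson(acc, mae), spearman(acc, mae))
```

A grouping like the one the paper reports falls out of such a matrix: metric pairs with correlations near 1 land in the same group, while weakly correlated pairs end up in different groups.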

