Article

The Matthews Correlation Coefficient (MCC) is More Informative Than Cohen's Kappa and Brier Score in Binary Classification Assessment

Journal

IEEE ACCESS
Volume 9, Issue -, Pages 78368-78381

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3084050

Keywords

Correlation; Measurement; Machine learning; Standards; Calibration; Blogs; Task analysis; Matthews correlation coefficient; Cohen's Kappa; binary classification; confusion matrix; supervised machine learning; Brier score; applied machine learning

Abstract

Measuring the outcome of binary classifications is crucial in machine learning and statistics, but there is no consensus on which statistical rate to use. The Matthews correlation coefficient (MCC) has advantages over other scores in terms of reliability on imbalanced datasets. When MCC is compared with Cohen's Kappa and the Brier score, it provides more truthful and informative results in certain use cases. It is therefore recommended to use MCC for evaluating binary classifications.
Even though measuring the outcome of binary classifications is a pivotal task in machine learning and statistics, no consensus has been reached yet about which statistical rate to employ to this end. In the last century, the computer science and statistics communities have introduced several scores summing up the correctness of the predictions with respect to the ground truth values. Among these scores, the Matthews correlation coefficient (MCC) was shown to have several advantages over confusion entropy, accuracy, F1 score, balanced accuracy, bookmaker informedness, markedness, and diagnostic odds ratio: MCC, in fact, produces a high score only if the majority of the predicted negative data instances and the majority of the predicted positive data instances are correct, and it is therefore very trustworthy on imbalanced datasets. In this study, we compare MCC with two other popular scores: Cohen's Kappa, a metric that originated in the social sciences, and the Brier score, a strictly proper scoring function that emerged in weather forecasting studies. After explaining the mathematical properties of, and the relationships between, MCC and each of these two rates, we report some use cases where the scores generate different values and lead to discordant outcomes, and where MCC provides the more truthful and informative result. We highlight the reasons why it is more advisable to use MCC rather than Cohen's Kappa and the Brier score to evaluate binary classifications.
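As a minimal sketch (not taken from the paper), the three scores discussed above can be computed with scikit-learn; the imbalanced toy labels and classifier outputs below are invented purely for demonstration.

# Minimal sketch (assumptions, not from the paper): MCC, Cohen's Kappa and the
# Brier score computed with scikit-learn on an invented imbalanced example.
from sklearn.metrics import matthews_corrcoef, cohen_kappa_score, brier_score_loss

# Hypothetical ground truth: 10 positives, 90 negatives (imbalanced).
y_true = [1] * 10 + [0] * 90

# Hypothetical classifier: it recovers only 2 of the 10 positives and gets all
# negatives right; its probabilistic outputs are 0.9 for predicted positives
# and 0.1 otherwise.
y_pred = [1, 1] + [0] * 8 + [0] * 90
y_prob = [0.9, 0.9] + [0.1] * 8 + [0.1] * 90

print("MCC          :", round(matthews_corrcoef(y_true, y_pred), 3))
print("Cohen's Kappa:", round(cohen_kappa_score(y_true, y_pred), 3))
print("Brier score  :", round(brier_score_loss(y_true, y_prob), 3))

The example only shows how the three scores are obtained on the same predictions (MCC and Kappa from the hard labels, the Brier score from the probabilities); it is not meant to reproduce the discordant use cases analysed in the paper.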
