Article

Fair Enough: Searching for Sufficient Measures of Fairness

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3585006

Keywords

Software fairness; fairness metrics; clustering; theoretical analysis; empirical analysis

Abstract

Testing machine learning software for ethical bias has become a pressing concern. In response, recent research has proposed a plethora of new fairness metrics, for example, the dozens of fairness metrics in the IBM AIF360 toolkit. This raises the question: How can any fairness tool satisfy such a diverse range of goals? While we cannot completely simplify the task of fairness testing, we can certainly reduce the problem. This article shows that many of those fairness metrics effectively measure the same thing. Based on experiments using seven real-world datasets, we find that (a) 26 classification metrics can be clustered into seven groups and (b) four dataset metrics can be clustered into three groups. Further, each reduced set may actually predict different things. Hence, it is no longer necessary (or even possible) to satisfy all fairness metrics. In summary, to simplify the fairness testing problem, we recommend the following steps: (1) determine what type of fairness is desirable (and we offer a handful of such types), then (2) look up those types in our clusters, and then (3) just test for one item per cluster. For the purpose of reproducibility, our scripts and data are available at https://github.com/Repoanonymous/Fairness_Metrics.

