Proceedings Paper

Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3468264.3468565

Keywords

Software fairness; bias mitigation; model mutation

Funding

  1. ERC [741278]

Abstract

The study introduces a new approach for evaluating the effectiveness of bias mitigation methods in machine learning, revealing that many of these methods perform poorly.
The increasingly wide uptake of Machine Learning (ML) has raised the significance of the problem of tackling bias (i.e., unfairness), making it a primary software engineering concern. In this paper, we introduce Fairea, a model behaviour mutation approach to benchmarking ML bias mitigation methods. We also report on a large-scale empirical study that tests the effectiveness of 12 widely studied bias mitigation methods. Our results reveal that, surprisingly, bias mitigation methods have poor effectiveness in 49% of cases. In particular, 15% of the mitigation cases have a worse fairness-accuracy trade-off than the baseline established by Fairea, and 34% of the cases show both a decrease in accuracy and an increase in bias. Fairea has been made publicly available for software engineers and researchers to evaluate their bias mitigation methods.
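The abstract describes the approach only at a high level. Below is a minimal sketch, assuming binary integer labels held in NumPy arrays, of how a model-behaviour-mutation baseline can be constructed and used to classify a mitigation method's fairness-accuracy trade-off. The mutation strategy (majority-class replacement), the bias metric (statistical parity difference), the region names, and all function names (`mutation_baseline`, `trade_off_region`) are illustrative assumptions, not the paper's published API.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    # Absolute gap in positive-prediction rates between the two groups
    # (assumed bias metric; protected is a 0/1 group-membership array).
    return abs(y_pred[protected == 1].mean() - y_pred[protected == 0].mean())

def mutation_baseline(y_true, y_pred, protected,
                      degrees=np.linspace(0.0, 1.0, 11), seed=0):
    # Trace a naive fairness-accuracy trade-off curve by replacing an
    # increasing fraction of the original model's predictions with the
    # majority-class label -- a simple mutation of the model's behaviour.
    rng = np.random.default_rng(seed)
    majority = int(np.bincount(y_pred).argmax())
    points = []
    for d in degrees:
        mutated = y_pred.copy()
        idx = rng.choice(len(y_pred), size=int(d * len(y_pred)), replace=False)
        mutated[idx] = majority
        acc = float((mutated == y_true).mean())
        bias = float(statistical_parity_difference(mutated, protected))
        points.append((float(d), acc, bias))
    return points  # list of (mutation degree, accuracy, bias)

def trade_off_region(method_acc, method_bias, baseline):
    # Place a mitigation method's (accuracy, bias) point into a simplified
    # version of the trade-off regions the baseline carves out.
    _, orig_acc, orig_bias = baseline[0]  # degree 0: the unmutated model
    if method_acc >= orig_acc and method_bias <= orig_bias:
        return "win-win"
    if method_acc < orig_acc and method_bias > orig_bias:
        return "lose-lose"  # accuracy dropped AND bias increased
    # Interpolate the baseline's accuracy at the method's bias level:
    # losing more accuracy than naive mutation does for the same bias
    # reduction counts as a poor trade-off.
    biases = np.array([b for _, _, b in baseline])
    accs = np.array([a for _, a, _ in baseline])
    order = np.argsort(biases)
    base_acc = np.interp(method_bias, biases[order], accs[order])
    return "good trade-off" if method_acc >= base_acc else "poor trade-off"
```

Under this reading, each mitigation method's point is compared against such a baseline; points falling below the naive mutation curve (poor trade-offs) or into the lose-lose region would account for the 15% and 34% of ineffective cases reported in the abstract.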

