3.8 Proceedings Paper

Bias in Machine Learning Software: Why? How? What to Do?

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3468264.3468537

Keywords

Software Fairness; Fairness Metrics; Bias Mitigation

Funding

  1. National Science Foundation (NSF) [1908762]
  2. LAS
  3. Division of Computing and Communication Foundations
  4. Directorate for Computer & Information Science & Engineering [1908762] (Funding Source: National Science Foundation)

Abstract

This study examines bias in software decisions and proposes a new mitigation method. By removing biased labels and rebalancing internal data distributions, the method reduces bias as effectively as prior approaches while achieving higher predictive performance.
Increasingly, software is making autonomous decisions in areas such as criminal sentencing, credit card approval, and hiring. Some of these decisions show bias and adversely affect certain social groups (e.g., those defined by sex, race, age, or marital status). Many prior works on bias mitigation take the following form: change the data or the learners in multiple ways, then check whether any of those changes improves fairness. Perhaps a better approach is to postulate root causes of bias and then apply a resolution strategy targeted at them. This paper checks whether the root causes of bias are the prior decisions about (a) what data was selected and (b) what labels were assigned to those examples. Our Fair-SMOTE algorithm removes biased labels and rebalances internal distributions so that, based on the sensitive attribute, examples are equal in the positive and negative classes. In testing, this method was just as effective at reducing bias as prior approaches. Further, models generated via Fair-SMOTE achieve higher performance (measured in terms of recall and F1) than other state-of-the-art fairness improvement algorithms. To the best of our knowledge, measured in terms of the number of analyzed learners and datasets, this study is one of the largest studies on bias mitigation yet presented in the literature.
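
The rebalancing idea described in the abstract can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration only, not the authors' Fair-SMOTE implementation: it assumes a pandas DataFrame with placeholder column names ("sex", "label"), uses plain random oversampling where the paper generates synthetic SMOTE-style examples, and omits the biased-label removal step entirely.

```python
# Minimal sketch of subgroup balancing (illustrative, NOT the authors' Fair-SMOTE code).
# Assumes a pandas DataFrame with a binary sensitive attribute and a binary label;
# the column names "sex" and "label" are placeholders.
import pandas as pd


def balance_subgroups(df: pd.DataFrame,
                      sensitive: str = "sex",
                      label: str = "label",
                      random_state: int = 0) -> pd.DataFrame:
    """Grow every (sensitive value, class) subgroup to the size of the largest
    one, so each group ends up with equal positive and negative examples."""
    groups = [g for _, g in df.groupby([sensitive, label])]
    target = max(len(g) for g in groups)
    balanced = []
    for g in groups:
        if len(g) < target:
            # Random oversampling stands in for SMOTE-style synthetic generation.
            extra = g.sample(n=target - len(g), replace=True,
                             random_state=random_state)
            g = pd.concat([g, extra])
        balanced.append(g)
    # Shuffle so oversampled rows are not clustered at the end.
    return (pd.concat(balanced)
              .sample(frac=1, random_state=random_state)
              .reset_index(drop=True))


# Example: a toy dataset with unequal subgroup sizes; after balancing,
# every (sex, label) combination appears the same number of times.
toy = pd.DataFrame({"sex": [0, 0, 0, 1, 1, 0], "label": [1, 1, 0, 0, 1, 1]})
print(balance_subgroups(toy)[["sex", "label"]].value_counts())
```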
