Article

Fairness-Aware PAC Learning from Corrupted Data

Journal

Journal of Machine Learning Research

Publisher

MICROTOME PUBL

Keywords

Fairness; robustness; data poisoning; trustworthy machine learning; PAC learning

Summary

This work investigates fairness-aware learning under worst-case data manipulations. It shows that in some situations the learner can be forced to return an overly biased classifier, and that the excess bias is largest for learning problems in which protected groups are underrepresented in the data. It further shows that two learning algorithms that optimize for both accuracy and fairness achieve guarantees that are order-optimal in the corruption ratio and the protected-group frequencies in the large-data limit.
Abstract

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can in some situations force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading accuracy, and that the strength of the excess bias increases for learning problems with underrepresented protected groups in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected-group frequencies in the large data limit.
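
To make the accuracy-fairness trade-off in the abstract concrete, below is a minimal sketch of a fairness-constrained empirical risk minimizer over a finite hypothesis class, written in Python with the demographic-parity gap as the fairness measure. This is an illustrative assumption only: the abstract does not specify the paper's two algorithms or its exact bias notion, and the names dp_gap, fairness_aware_erm, and the tolerance tau are hypothetical.

    import numpy as np

    def dp_gap(preds, groups):
        # Demographic-parity gap |P(h(x)=1 | a=0) - P(h(x)=1 | a=1)|.
        # A standard fairness measure, assumed here for illustration;
        # the paper's exact bias notion is not given in the abstract.
        return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

    def fairness_aware_erm(hypotheses, X, y, groups, tau=0.05):
        # Constrained-ERM sketch: among hypotheses whose empirical
        # fairness gap is at most tau, return the one with the lowest
        # empirical error; if none is feasible, fall back to the
        # hypothesis with the smallest gap.
        scored = []
        for h in hypotheses:
            preds = h(X)
            scored.append((dp_gap(preds, groups), np.mean(preds != y), h))
        feasible = [s for s in scored if s[0] <= tau]
        if feasible:
            return min(feasible, key=lambda s: s[1])[2]
        return min(scored, key=lambda s: s[0])[2]

    # Toy usage: threshold classifiers on one feature, with the protected
    # attribute shifting the feature distribution between groups.
    rng = np.random.default_rng(0)
    groups = rng.integers(0, 2, size=1000)          # protected attribute a
    X = (rng.normal(size=1000) + 0.8 * groups).reshape(-1, 1)
    y = (X[:, 0] > 0.5).astype(int)
    hypotheses = [lambda X, t=t: (X[:, 0] > t).astype(int)
                  for t in np.linspace(-1.0, 2.0, 31)]
    h_star = fairness_aware_erm(hypotheses, X, y, groups, tau=0.1)
    print("error:", np.mean(h_star(X) != y),
          "gap:", dp_gap(h_star(X), groups))

The fallback branch reflects the constrained-optimization design choice: when no hypothesis meets the fairness tolerance, the sketch returns the fairest one available. Note that this sketch operates on clean data; the paper's setting additionally allows a fraction of the sample to be adversarially corrupted, which this example does not model.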
