Article

Fairness in Semi-Supervised Learning: Unlabeled Data Help to Reduce Discrimination

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2020.3002567

Keywords

Machine learning; Training; Semisupervised learning; Data models; Labeling; Machine learning algorithms; Measurement; Fairness; discrimination; machine learning; semi-supervised learning

Funding

  1. Australian Research Council, Australia [DP190100981]
  2. NSF [III-1526499, III-1763325, III-1909323, CNS-1930941]

Abstract

This paper explores the use of semi-supervised learning to address fairness issues in machine learning: labels are predicted for unlabeled data, the enlarged training set is re-sampled into multiple fair datasets, and ensemble learning is used to improve accuracy and reduce discrimination. Theoretical analysis and experiments demonstrate that the method achieves a better trade-off between accuracy and fairness.
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair. While research is already underway to formalize a machine-learning concept of fairness and to design frameworks for building fair models, usually at some sacrifice in accuracy, most of this work is geared toward either supervised or unsupervised learning. Yet two observations inspired us to wonder whether semi-supervised learning might be useful for solving discrimination problems. First, previous studies showed that increasing the size of the training set may lead to a better trade-off between fairness and accuracy. Second, the most powerful models today require an enormous amount of data to train, which, in practical terms, is likely only obtainable as a combination of labeled and unlabeled data. Hence, in this paper, we present a framework for fair semi-supervised learning in the pre-processing phase, comprising pseudo labeling to predict labels for unlabeled data, a re-sampling method to obtain multiple fair datasets, and, lastly, ensemble learning to improve accuracy and decrease discrimination. A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning. A set of experiments on real-world and synthetic datasets shows that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.
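
The three pre-processing steps named in the abstract (pseudo labeling, fair re-sampling, ensembling) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the logistic-regression base learner, the 0.8 confidence threshold, and the equal-sized (group, label) cell subsampling rule are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.8):
    """Fit a base classifier on the labeled set; return a confidence mask
    and pseudo-labels for the unlabeled pool."""
    base = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = base.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold          # keep only confident pseudo-labels
    return keep, base.classes_[proba.argmax(axis=1)]


def fair_resample(X, y, s, rng):
    """Subsample so every (protected group, label) cell has the same size,
    yielding one 'fair' training set."""
    cells = [(si, yi) for si in np.unique(s) for yi in np.unique(y)]
    target = min(int(np.sum((s == si) & (y == yi))) for si, yi in cells)
    idx = np.concatenate([
        rng.choice(np.flatnonzero((s == si) & (y == yi)), target, replace=False)
        for si, yi in cells
    ])
    return X[idx], y[idx]


def fair_ssl_ensemble(X_lab, y_lab, s_lab, X_unlab, s_unlab,
                      n_models=5, seed=0):
    """Pseudo-label, draw several fair resamples, train an ensemble."""
    rng = np.random.default_rng(seed)
    keep, y_pseudo = pseudo_label(X_lab, y_lab, X_unlab)
    # Augment the labeled data with confidently pseudo-labeled examples.
    X_all = np.vstack([X_lab, X_unlab[keep]])
    y_all = np.concatenate([y_lab, y_pseudo[keep]])
    s_all = np.concatenate([s_lab, s_unlab[keep]])
    models = []
    for _ in range(n_models):
        X_b, y_b = fair_resample(X_all, y_all, s_all, rng)
        models.append(LogisticRegression(max_iter=1000).fit(X_b, y_b))

    def predict(X):
        # Majority vote over the ensemble (assumes binary labels 0/1).
        votes = np.stack([m.predict(X) for m in models])
        return (votes.mean(axis=0) >= 0.5).astype(int)

    return predict
```

Given arrays with binary labels and a binary protected attribute s, fair_ssl_ensemble(X_lab, y_lab, s_lab, X_unlab, s_unlab) returns a predictor that majority-votes over the fair ensemble.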
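
The decomposition analysis referred to above builds on the classical bias-variance-noise view of expected loss. As a rough sketch in the standard Domingos-style formulation (the paper's own decomposition of the discrimination level is analogous but not reproduced here), the expected loss at a point $x$ decomposes as

$$\mathbb{E}_{D,\,t}\!\left[ L\!\left(t, \hat{y}_D(x)\right) \right] = B(x) + c_1\, V(x) + c_2\, N(x),$$

where $\hat{y}_D(x)$ is the prediction of a model trained on set $D$, $B(x)$ is the bias of the main (e.g., majority-vote) prediction with respect to the optimal prediction, $V(x)$ is the variance of $\hat{y}_D(x)$ across training sets, $N(x)$ is the irreducible noise in the target $t$, and $c_1, c_2$ are loss-dependent constants (both equal to 1 for squared loss). Enlarging the training set with unlabeled data mainly acts on the variance term, which is one intuition behind the fairness-accuracy gains reported in the abstract.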
