4.7 Article

Improving fairness of artificial intelligence algorithms in Privileged-Group Selection Bias data settings

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 185

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2021.115667

Keywords

Algorithmic bias; Algorithmic fairness; Fairness-aware machine learning; Semi-supervised learning; Selection bias

Funding

  1. Koret Foundation grant for Smart Cities and Digital Living 2030

This paper examines the fairness of AI algorithms in data settings affected by Privileged Group Selection Bias, showing that such bias can lead to high algorithmic bias even when privileged and unprivileged groups are treated identically, and proposes methods to overcome it that achieve considerable improvements in fairness with only a minimal compromise in accuracy.
An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence (AI) algorithms. Since these algorithms now touch on many aspects of our lives, it is crucial to develop AI algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness, even when there is no intention for it. In this paper, we study the fairness of AI algorithms in data settings in which unprivileged groups are extremely underrepresented compared to privileged groups. A typical domain that often presents such Privileged Group Selection Bias (PGSB) is AI-based hiring, where the bias stems from an inherent lack of labeled information about rejected applicants. We first demonstrate that such a selection bias can lead to a high algorithmic bias, even if privileged and unprivileged groups are treated exactly the same. We then propose several methods to overcome this type of bias. In particular, we suggest three in-process and pre-process fairness mechanisms, combined with both supervised and semi-supervised learning algorithms. An extensive evaluation conducted on two real-world datasets reveals that the proposed methods are able to improve fairness considerably, with only a minimal compromise in accuracy. This is despite the limited information available for unprivileged groups and the inherent trade-off between fairness and accuracy.
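To make the general idea concrete, the sketch below combines a standard reweighing pre-processing step (in the style of Kamiran and Calders) with a simple self-training loop that pseudo-labels unlabeled, mostly unprivileged examples, and then reports the demographic-parity gap as a fairness measure. This is a minimal illustration under assumed synthetic data, not the paper's actual mechanisms or datasets; the `reweigh` helper, the 0.9 confidence cutoff, and the data-generation details are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): reweighing pre-processing
# plus self-training, using only NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: X features, s = protected attribute (1 = privileged),
# y = hiring outcome. Unprivileged applicants are heavily underrepresented
# among the labeled examples, mimicking Privileged Group Selection Bias.
n = 2000
X = rng.normal(size=(n, 5))
s = (rng.random(n) < 0.8).astype(int)          # ~80% privileged
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
labeled = (s == 1) | (rng.random(n) < 0.1)     # few labeled unprivileged

def reweigh(s, y):
    """Kamiran-Calders style reweighing: weight each (group, label) cell so
    that group membership and label become statistically independent."""
    w = np.ones(len(y), dtype=float)
    for g in (0, 1):
        for c in (0, 1):
            mask = (s == g) & (y == c)
            if mask.any():
                expected = (s == g).mean() * (y == c).mean()
                observed = mask.mean()
                w[mask] = expected / observed
    return w

# 1) Train an initial model on the (biased) labeled subset with reweighing.
clf = LogisticRegression(max_iter=1000)
clf.fit(X[labeled], y[labeled], sample_weight=reweigh(s[labeled], y[labeled]))

# 2) Self-training: pseudo-label confident unlabeled (mostly unprivileged)
#    examples and refit, so the model sees more of the unprivileged group.
proba = clf.predict_proba(X[~labeled])[:, 1]
confident = (proba > 0.9) | (proba < 0.1)
X_aug = np.vstack([X[labeled], X[~labeled][confident]])
y_aug = np.concatenate([y[labeled], (proba[confident] > 0.5).astype(int)])
s_aug = np.concatenate([s[labeled], s[~labeled][confident]])
clf.fit(X_aug, y_aug, sample_weight=reweigh(s_aug, y_aug))

# 3) Inspect a simple fairness metric: the demographic-parity gap, i.e. the
#    difference in positive-prediction rates between the two groups.
pred = clf.predict(X)
dp_gap = abs(pred[s == 1].mean() - pred[s == 0].mean())
print(f"Demographic-parity gap: {dp_gap:.3f}")
```

In this kind of setup, a smaller demographic-parity gap after reweighing and self-training, at a similar accuracy, would reflect the fairness-accuracy trade-off discussed in the abstract; the specific mechanisms and metrics used in the paper may differ.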
