Proceedings Paper

Towards Explainability for AI Fairness

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-04083-2_18

Keywords

Fairness; Explainable AI; Explainability; Machine learning

Funding

  1. Austrian Science Fund (FWF) [P-32554]

Abstract

AI explainability is becoming indispensable for allowing users to gain insight into an AI system's decision-making process. Meanwhile, fairness is a rising concern: algorithmic predictions may be misaligned with the designer's intent or with social expectations, for example by discriminating against specific groups. In this work, we provide a state-of-the-art overview of the relationship between explanation and AI fairness, with particular attention to the role of explanation in humans' fairness judgments. The investigations demonstrate that fair decision making requires extensive contextual understanding, and that AI explanations help identify the potential variables driving unfair outcomes. Different types of AI explanations are found to affect humans' fairness judgments differently. Certain properties of features, as well as theories from the social sciences, need to be considered when making sense of fairness through explanations. Finally, we identify challenges in building responsible AI for trustworthy decision making from the perspectives of explainability and fairness.
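
The abstract notes that AI explanations can help surface the variables driving unfair outcomes. As a minimal, hypothetical illustration of that idea (not code from the paper), the sketch below trains a classifier on synthetic data whose labels leak a protected attribute, then uses scikit-learn's permutation importance as a simple explanation method; a high importance score for the protected attribute flags it as a candidate driver of unfair predictions. All variable names and data here are illustrative assumptions.

```python
# Hypothetical sketch: feature attributions as a fairness-auditing aid.
# Data and names are illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic tabular data: one protected attribute plus two task features.
n = 1000
protected = rng.integers(0, 2, n)   # e.g., a binary group membership
skill = rng.normal(size=n)          # legitimate predictor
noise = rng.normal(size=n)          # irrelevant feature
X = np.column_stack([protected, skill, noise])

# The label leaks the protected attribute, simulating a biased labeling process.
y = (skill + 0.8 * protected + 0.3 * rng.normal(size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=30, random_state=0)
for name, imp in zip(["protected", "skill", "noise"], result.importances_mean):
    print(f"{name:>10}: {imp:.3f}")
# A large importance for "protected" flags it as a candidate driver of
# unfair predictions, prompting closer contextual review.
```

Permutation importance stands in here for whatever explanation method is in use; the point is the workflow of surfacing candidate variables for contextual review, not this particular technique.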
