Article

Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users' perceptions of fairness toward an algorithmic system

Journal

ETHICS AND INFORMATION TECHNOLOGY
Volume 24, Issue 1

Publisher

SPRINGER
DOI: 10.1007/s10676-022-09623-4

Keywords

Fairness; Explainability; Algorithmic systems; Decision support systems; Users' perception

Funding

  1. Cyprus Center for Algorithmic Transparency, European Union [810105]
  2. University of Haifa, Israel
  3. Data Science Research Center (DSRC) at the University of Haifa, Israel

Abstract

Given the widespread use of algorithmic systems, it is important to explain their decision-making processes and outcomes. Different explanation styles have varying impacts on users' fairness perceptions and on their understanding of those outcomes. Providing explanations improves understanding, and some explanation styles are more beneficial than others; users' perception of fairness, however, depends primarily on the system's outcome.
In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is growing awareness of the need to explain their underlying decision-making processes and resulting outcomes. Because these systems are often regarded as black boxes, attaching explanations to their outcomes may make them appear more transparent and, as a result, increase users' trust in the system and their perception of its fairness, regardless of its actual fairness, which can be measured with various fairness tests and metrics. Different explanation styles may affect users' perception of the system's fairness and their understanding of its outcome in different ways, so it is necessary to understand how the various styles influence non-expert users' perceptions. This study addresses that need. We conducted a between-subject user study to examine the effect of different explanation styles on users' fairness perception and their understanding of the outcome. The experiment compared four established styles of textual explanation (case-based, demographic-based, input influence-based and sensitivity-based) with a new style (certification-based) that reflects the results of auditing the system. The results suggest that providing some form of explanation improves users' understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while the explanations a system provides matter and can indeed enhance users' perception of fairness, that perception depends mainly on the system's outcome. These findings bear on one of the central problems in the explainability of algorithmic systems: choosing the explanation that best promotes users' fairness perception toward a particular system, given the system's outcome. The study's contributions are a new and realistic case study, the creation and evaluation of a new explanation style that can serve as the link between a system's actual (computational) fairness and users' fairness perception, and the demonstration that explanations should be analyzed and evaluated with the system's outcome taken into account.
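
To make the five explanation styles concrete, here is a minimal Python sketch of how each might be rendered as a textual template for a hypothetical loan-approval system. The wording, field names, and the Decision record are illustrative assumptions, not the actual stimuli used in the study.

```python
# Hypothetical templates for the five explanation styles named in the
# abstract. All fields and phrasings below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # "approved" or "rejected"
    top_feature: str   # most influential input feature (input influence-based)
    flip_change: str   # smallest change that would flip the outcome (sensitivity-based)
    similar_case: str  # outcome of the most similar past case (case-based)
    group_rate: int    # % of same-group applicants with this outcome (demographic-based)

def explain(d: Decision, style: str) -> str:
    styles = {
        "case-based":
            f"A past applicant with a very similar profile was {d.similar_case}.",
        "demographic-based":
            f"{d.group_rate}% of applicants in your demographic group "
            f"received the same outcome.",
        "input-influence":
            f"The feature that most influenced this decision was {d.top_feature}.",
        "sensitivity-based":
            f"The outcome would change if {d.flip_change}.",
        "certification-based":
            "This system was audited by an external body and certified as "
            "complying with accepted fairness standards.",
    }
    return f"Your application was {d.outcome}. " + styles[style]

d = Decision("rejected", "income-to-debt ratio",
             "your reported income were 10% higher", "also rejected", 72)
for s in ("case-based", "demographic-based", "input-influence",
          "sensitivity-based", "certification-based"):
    print(f"[{s}] {explain(d, s)}")
```

Note how the certification-based style differs from the other four: it describes the system's audited (computational) fairness rather than the individual decision, which is why the abstract positions it as a link between actual fairness and perceived fairness.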
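The abstract also refers to the system's actual fairness "which can be measured using various fairness tests and measurements" without naming them. As one common example, the sketch below computes the demographic parity difference between two groups; the choice of this particular metric, and the toy data, are assumptions.

```python
# Minimal sketch of one common computational fairness test: the
# demographic parity difference. Assumes exactly two groups; not
# necessarily the test used by the study's auditing process.
def demographic_parity_difference(outcomes, groups, positive="approved"):
    """Absolute gap in positive-outcome rates between the two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(outcomes[i] == positive for i in idx) / len(idx)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

outcomes = ["approved", "rejected", "approved", "approved", "rejected", "rejected"]
groups   = ["A", "A", "A", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 2/3 - 1/3 = 0.333...
```

A value near zero indicates that both groups receive positive outcomes at similar rates; the study's point is that a system can score well on such a test and still be perceived as unfair if its outcome is unfavorable to the user.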

