Article

The grammar of interactive explanatory model analysis

Journal

DATA MINING AND KNOWLEDGE DISCOVERY
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s10618-023-00924-w

Keywords

Explainable AI; Model-agnostic explanation; Black-box model; Interactive explainability; Human-centered XAI


Abstract

The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of the model's behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper proposes how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe human-model interaction. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model may increase the accuracy and confidence of human decision making.

