Journal
EXPERT SYSTEMS WITH APPLICATIONS
Volume 213
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.118888
Keywords
Explainable AI; XAI; Explanations; Taxonomy; Artificial intelligence; Machine learning
Abstract
In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or pieces of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definitions aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.