Article

Levels of explainable artificial intelligence for human-aligned conversational explanations

Journal

ARTIFICIAL INTELLIGENCE
Volume 299, Article 103525

Publisher

ELSEVIER
DOI: 10.1016/j.artint.2021.103525

Keywords

Explainable Artificial Intelligence (XAI); Broad-XAI; Interpretable Machine Learning (IML); Artificial General Intelligence (AGI); Human-Computer Interaction (HCI)

Abstract

Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level 'strong' explanations. © 2021 Elsevier B.V. All rights reserved.
