Article

Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey

Journal

INFORMATION SCIENCES
Volume 615, Pages 238-292

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.10.013

Keywords

Black-box; White-box; Explainable AI; Responsible AI; Machine learning; Deep learning

Abstract

The continuous advancement of Artificial Intelligence (AI) has revolutionized decision-making in various domains, but the lack of transparency and explainability in AI algorithms poses ethical challenges. Explainable Artificial Intelligence (XAI) aims to generate human-comprehensible explanations to reveal the internal workings of AI decisions. This study provides a taxonomy and evaluation of XAI research, discusses the advantages, limitations, and evaluation metrics of explanation generation techniques, and identifies future research directions and challenges.
The continuous advancement of Artificial Intelligence (AI) has been revolutionizing decision-making across many domains of life. Despite this achievement, AI algorithms are often built as black boxes: they hide their internal rationale and learning methodology from humans, leaving many unanswered questions about how and why AI decisions are made. The absence of explanation poses a serious ethical challenge. Explainable Artificial Intelligence (XAI) is an evolving subfield of AI that focuses on developing tools and techniques for unboxing black-box AI solutions by generating human-comprehensible, insightful, and transparent explanations of AI decisions. This study begins by discussing the primary principles of XAI research, the black-box problem, the targeted audience, and the related notion of explainability over the historical timeline of XAI studies, and accordingly establishes an innovative definition of explainability that addresses earlier theoretical proposals. Based on an extensive analysis of the literature, this study contributes to the body of knowledge by deriving a fine-grained, multi-level, and multi-dimensional taxonomy for insightful categorization of XAI studies, with the main aim of shedding light on the variations and commonalities of existing algorithms and paving the way for further methodological developments. An experimental comparative analysis is then presented of the explanations generated by common XAI algorithms applied to different categories of data, highlighting their properties, advantages, and flaws. Subsequently, this study discusses and categorizes evaluation metrics for XAI-generated explanations; the findings show that there is no common consensus on how an explanation should be expressed or how its quality and dependability should be evaluated. The findings also show that XAI can contribute to realizing responsible and trustworthy AI; however, the advantages of interpretability should be technically demonstrated, and complementary procedures and regulations are required to provide actionable information that can empower decision-making in real-world applications. Finally, the tutorial is crowned by discussing the open research questions, challenges, and future directions that serve as a roadmap for the AI community to advance research in XAI and to inspire specialists and practitioners to take advantage of XAI in different disciplines. (c) 2022 Elsevier Inc. All rights reserved.
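
To make the idea of "unboxing" a black-box model concrete, the sketch below applies one common, model-agnostic XAI technique (permutation feature importance) to a tabular classifier. This is an illustrative example only, not code from the survey; it assumes scikit-learn is installed, and the dataset, model, and variable names are chosen purely for illustration.

# Minimal sketch: post-hoc, model-agnostic explanation of a black-box classifier.
# Assumptions: scikit-learn is available; dataset and names are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black-box") model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanation: permutation importance measures how much test accuracy
# drops when each feature is shuffled, without inspecting the model's internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking[:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")

Permutation importance is a global explanation; local attribution methods surveyed in the paper (for example, LIME or SHAP) instead explain individual predictions, but follow the same post-hoc, model-agnostic pattern of probing the black box from the outside.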
