Article

Explanation in artificial intelligence: Insights from the social sciences

Journal

ARTIFICIAL INTELLIGENCE
Volume 267, Pages 1-38

Publisher

ELSEVIER
DOI: 10.1016/j.artint.2018.07.007

Keywords

Explanation; Explainability; Interpretability; Explainable AI; Transparency

Funding

  1. Australian Research Council [DP160104083]
  2. Commonwealth of Australia Defence Science and Technology Group
  3. Defence Science Institute, an initiative of the State Government of Victoria

Abstract

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence. (C) 2018 Elsevier B.V. All rights reserved.
