Review

Causability and explainability of artificial intelligence in medicine

Publisher

WILEY PERIODICALS, INC
DOI: 10.1002/widm.1312

Keywords

artificial intelligence; causability; explainability; explainable AI; histopathology; medicine

Funding

  1. FeatureCloud [826078]
  2. Hochschulraum-Infrastrukturmittelfonds
  3. MEFO
  4. Austrian Science Fund (FWF) [I2714-B31]
  5. EU under H2020 [765148]

Abstract

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine, we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability, as well as a use-case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.

This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction


Reviews

Primary rating

4.6 (insufficient number of ratings)

Secondary ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommendations

No data available