Article

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Journal

INFORMATION FUSION
Volume 71, Issue -, Pages 28-37

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.01.008

Keywords

Information fusion; Explainable AI; xAI; Graph Neural Networks; Multi-modal causability; Knowledge graphs; Counterfactuals

Funding

  1. EU Project FeatureCloud
  2. European Union [826078]
  3. Austrian Science Fund (FWF) [P-32554]


AI excels at certain tasks, but humans excel at multi-modal thinking, and a key goal in many fields is to build systems that can explain themselves. The medical domain highlights how multiple modalities contribute to a single result. Using conceptual knowledge to guide model training can lead to more explainable, more robust, and less biased machine learning models.
AI is remarkably successful and outperforms human experts in certain tasks, even in complex domains such as medicine. Humans, on the other hand, are experts at multi-modal thinking and can embed new inputs almost instantly into a conceptual knowledge space shaped by experience. In many fields the aim is to build systems capable of explaining themselves and engaging in interactive what-if questions. Such questions, called counterfactuals, are becoming important in the rising field of explainable AI (xAI). Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from less data. One important aspect in the medical domain is that various modalities contribute to a single result. Our main question is: "How can we construct a multi-modal feature representation space (spanning images, text, and genomics data) using knowledge bases as an initial connector for the development of novel explanation interface techniques?" In this paper we argue for using Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability, not to be confused with causality, is the measurable extent to which an explanation to a human expert achieves a specified level of causal understanding). The aim of this paper is to motivate the international xAI community to work further in the fields of multi-modal embeddings and interactive explainability, to lay the foundations for effective future human-AI interfaces. We emphasize that Graph Neural Networks play a major role for multi-modal causability, since causal links between features can be defined directly using graph structures.
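The abstract's core argument is that a graph structure (e.g. derived from a knowledge base) can directly encode which features are linked, and a Graph Neural Network can then fuse per-modality features over those links. The sketch below is an illustration of that idea only, not the authors' implementation: the class name, dimensions, and the toy adjacency matrix are all assumptions, and a single plain-PyTorch message-passing step stands in for whatever GNN architecture the paper envisions.

```python
# Minimal sketch: fuse image, text and genomics node features over a
# knowledge-graph adjacency with one message-passing step (assumptions only).
import torch
import torch.nn as nn


class ModalityFusionGNN(nn.Module):
    """Projects per-modality features into a shared space and propagates
    them over an adjacency matrix derived from a knowledge graph."""

    def __init__(self, image_dim, text_dim, genomics_dim, hidden_dim):
        super().__init__()
        # One projection per modality into the shared representation space.
        self.proj = nn.ModuleDict({
            "image": nn.Linear(image_dim, hidden_dim),
            "text": nn.Linear(text_dim, hidden_dim),
            "genomics": nn.Linear(genomics_dim, hidden_dim),
        })
        # Combines a node's own state with the aggregated neighbour messages.
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, features, adjacency):
        # Stack all modality nodes into one node-feature matrix (n_nodes x hidden_dim).
        h = torch.cat([self.proj[m](x) for m, x in features.items()], dim=0)
        # Mean aggregation over neighbours defined by the knowledge graph;
        # which entries of `adjacency` are non-zero encodes the assumed links.
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        messages = (adjacency @ h) / deg
        return torch.relu(self.update(torch.cat([h, messages], dim=1)))


# Toy usage: 2 image nodes, 2 text nodes, 1 genomics node linked by a
# hand-specified 5x5 adjacency standing in for a knowledge graph.
feats = {
    "image": torch.randn(2, 64),
    "text": torch.randn(2, 32),
    "genomics": torch.randn(1, 128),
}
adj = torch.tensor([
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
], dtype=torch.float32)
model = ModalityFusionGNN(image_dim=64, text_dim=32, genomics_dim=128, hidden_dim=16)
fused = model(feats, adj)  # fused node representations, shape (5, 16)
```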
