Article

Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence

Journal

INFORMATION FUSION
Volume 79, Pages 263-278

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.10.007

Keywords

Artificial intelligence; Information fusion; Medical AI; Explainable AI; Robustness; Explainability; Trust; Graph-based machine learning; Neural-symbolic learning and reasoning

Funding

  1. Austrian Science Fund (FWF) [P-32554]
  2. European Union's Horizon 2020 research and innovation program [826078, 965221]
  3. Spanish Government Juan de la Cierva Incorporación [IJC2019-039152-I]
  4. DFF Sapere Aude research leader grant
  5. Basque Government through the ELKARTEK program (3KIA project) [KK-2020/00049]
  6. consolidated research group MATHMODE [T1294-19]
  7. German Federal Ministry of Education and Research [01IS18025A, 01IS18037I, 0310L0207C]
  8. Ontario Research Fund [RDI 34876]
  9. Natural Sciences and Engineering Research Council of Canada [NSERC 203475]
  10. CIHR Research Grant [93579]
  11. Canada Foundation for Innovation [CFI 29272, 225404, 33536]
  12. IBM
  13. Ian Lawson van Toch Fund
  14. Schroeder Arthritis Institute via the Toronto General and Western Hospital Foundation

Abstract

Medical artificial intelligence systems have achieved significant success and are crucial for improving human health. To perform reliably in routine settings, they must handle uncertainty and errors and explain both their results and the process by which those results were obtained. Information fusion can help develop more robust and explainable machine learning models.
Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming humans at certain tasks. There is no doubt that AI is important to improve human health in many ways and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond the lab, in routine environments, we need to do more than just improve the performance of existing AI methods. Robust AI solutions must be able to cope with imprecision and with missing and incorrect information, and must explain both the result and the process of how it was obtained to a medical expert. Using conceptual knowledge as a guiding model of reality can help to develop more robust, explainable, and less biased machine learning models that can ideally learn from less data. Achieving these goals will require an orchestrated effort that combines three complementary Frontier Research Areas: (1) Complex Networks and their Inference, (2) Graph causal models and counterfactuals, and (3) Verification and Explainability methods. The goal of this paper is to describe these three areas from a unified view and to motivate how information fusion, applied in a comprehensive and integrative manner, can not only help bring these three areas together, but also play a transformative role by bridging the gap between research and practical applications in the context of future trustworthy medical AI. This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future solutions must not only be ethically responsible, but also legally compliant.
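The paper itself is described here only at the abstract level and gives no implementation, but a minimal sketch can illustrate the kind of information fusion it advocates. The Python/NumPy snippet below fuses two hypothetical risk estimates for one patient (say, one from an imaging model and one from a lab-data model) by inverse-variance weighting, so the more confident source dominates and the fused uncertainty shrinks; all names and numbers are illustrative assumptions, not the authors' method.

import numpy as np

def fuse_gaussian_estimates(means, variances):
    # Inverse-variance weighting: each estimate contributes in proportion
    # to its confidence (1/variance); the fused variance is never larger
    # than the smallest input variance.
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_variance = 1.0 / weights.sum()
    fused_mean = fused_variance * (weights * means).sum()
    return fused_mean, fused_variance

# Hypothetical risk estimates from two independent sources:
# an imaging model (less certain) and a lab-value model (more certain).
fused_mean, fused_variance = fuse_gaussian_estimates(
    means=[0.72, 0.60], variances=[0.04, 0.01]
)
print(f"fused risk = {fused_mean:.3f}, fused variance = {fused_variance:.4f}")
# -> fused risk = 0.624, fused variance = 0.0080

In practice, reporting the fused uncertainty alongside the fused score, rather than a single opaque prediction, is one simple way such fusion can support the robustness and explainability goals described in the abstract.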
