Article

An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making

Journal

ARTIFICIAL INTELLIGENCE IN MEDICINE
Volume 103

Publisher

ELSEVIER
DOI: 10.1016/j.artmed.2020.101812

Keywords

Bayesian networks; Explanation of reasoning; Trust; Decision making

Funding

  1. European Research Council (ERC) [ERC-2013-AdG339182-BAYES KNOWLEDGE]
  2. Engineering and Physical Sciences Research Council (EPSRC) [EP/P009964/1] Funding Source: UKRI
  3. Department of Research & Clinical Innovation, HQ Joint Medical Group, UK Defence Medical Services

Abstract

Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely assessed. Clinicians are more likely to use a model if they can understand and trust its predictions, and key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box: its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to 'hybrid' BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study, and a small evaluation study is also conducted.
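The first question above, which evidence supports or contradicts a prediction, can be illustrated with a toy example. The sketch below is not the authors' method (the paper targets hybrid BNs with discrete and continuous nodes and a real clinical model); it only shows the general idea on a small, hypothetical discrete network: compute the posterior of a target node given all evidence, then retract each evidence item in turn and observe how the posterior shifts. All variable names and probabilities are invented for illustration, and the second question (information flow through intermediate variables) is not covered here.

```python
# Minimal sketch, assuming a toy 3-node discrete BN: Injury -> Shock <- BloodLoss.
# All names and numbers are hypothetical; this is not the paper's algorithm.
from itertools import product

# Priors and CPT for binary variables (1 = true).
p_injury = {1: 0.3, 0: 0.7}
p_bloodloss = {1: 0.2, 0: 0.8}
p_shock = {  # keyed by (injury, bloodloss): P(Shock = 1 | parents)
    (1, 1): 0.90, (1, 0): 0.40, (0, 1): 0.60, (0, 0): 0.05,
}

def joint(injury, bloodloss, shock):
    """Joint probability of one full assignment of the toy network."""
    p = p_injury[injury] * p_bloodloss[bloodloss]
    p_s1 = p_shock[(injury, bloodloss)]
    return p * (p_s1 if shock == 1 else 1.0 - p_s1)

def posterior_shock(evidence):
    """P(Shock = 1 | evidence) by full enumeration over the toy network."""
    num = den = 0.0
    for injury, bloodloss, shock in product((0, 1), repeat=3):
        state = {"Injury": injury, "BloodLoss": bloodloss, "Shock": shock}
        if any(state[var] != val for var, val in evidence.items()):
            continue
        p = joint(injury, bloodloss, shock)
        den += p
        if shock == 1:
            num += p
    return num / den

evidence = {"Injury": 1, "BloodLoss": 0}
full = posterior_shock(evidence)
print(f"P(Shock | all evidence) = {full:.3f}")

# Impact of each evidence item: retract it and see how the posterior moves.
# A positive change means the item supports the prediction of shock;
# a negative change means it contradicts it.
for var in evidence:
    reduced = {k: v for k, v in evidence.items() if k != var}
    without = posterior_shock(reduced)
    print(f"{var}: {full - without:+.3f} (posterior without it: {without:.3f})")
```

With these made-up numbers, retracting Injury=1 lowers the posterior (so it supports the shock prediction), while retracting BloodLoss=0 raises it (so it contradicts the prediction); an explanation of reasoning would surface exactly this kind of per-evidence impact to the clinician.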

