Article

A framework for falsifiable explanations of machine learning models with an application in computational pathology

Journal

MEDICAL IMAGE ANALYSIS
Volume 82

Publisher

ELSEVIER
DOI: 10.1016/j.media.2022.102594

Keywords

Explainable artificial intelligence; U-Net; Tumor segmentation; Falsifiability

Funding

  1. Ministry of Culture and Science (MKW) of the State of North Rhine-Westphalia, Germany [111.08.03.05-133974]
  2. German Federal Ministry of Education and Research [031L0264]

Abstract

This paper introduces a hypothesis-based framework for falsifiable explanations of machine learning models, which connects the intermediate space of the model with the data samples to provide falsifiable explanations. The authors instantiate this framework in the field of computational pathology using hyperspectral infrared microscopy, and validate the explanations by histological staining.
In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches to explain their output. However, formally defining explainability has remained a notoriously unsolved problem. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using hyperspectral infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by hypothesizing that activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment.
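The falsification step described in the abstract lends itself to a simple quantitative check. The sketch below is our illustration, not the authors' published code: it assumes the model's activation map and a tumor mask derived from histological staining are available as aligned 2D arrays, and the Dice-overlap test, the function names, and both thresholds are hypothetical choices made for this example.

```python
import numpy as np


def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Overlap between two binary masks (1.0 = identical, 0.0 = disjoint)."""
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0


def test_explanation(activation_map: np.ndarray,
                     stain_mask: np.ndarray,
                     act_threshold: float = 0.5,
                     dice_threshold: float = 0.7):
    """Hypothesis: high activation corresponds to tumor tissue.

    Falsification test: binarize the activation map and compare it with
    an independently obtained tumor mask (e.g., from histological
    staining). Low overlap falsifies the explanation.
    """
    predicted_tumor = activation_map >= act_threshold
    dice = dice_coefficient(predicted_tumor, stain_mask.astype(bool))
    return dice, dice >= dice_threshold


if __name__ == "__main__":
    # Synthetic stand-ins: a circular "tumor" region and a noisy
    # activation map that is high inside the region.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    stain_mask = ((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2
    activation_map = stain_mask * 0.9 + rng.normal(0.0, 0.1, (128, 128))

    dice, survives = test_explanation(activation_map, stain_mask)
    print(f"Dice = {dice:.3f}; hypothesis "
          f"{'survives' if survives else 'is falsified'}")
```

The key design point is that the validating mask comes from a secondary experiment (staining) that is independent of the model, so agreement or disagreement genuinely tests the hypothesis rather than restating the model's own output.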

