Article

A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features

Journal

Sensors
Volume 22, Issue 14, Article 5205

Publisher

MDPI
DOI: 10.3390/s22145205

Keywords

explainable machine learning; medical imaging; information visualization; radiomics

Funding

  1. Grohne-Stapp Endowment from the University of Colorado Cancer Center

Abstract

Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as black boxes. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for important clinical decisions, such as diagnosis or treatment selection. Explainable machine learning (XML) methods, such as Shapley values, make it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and to aid physicians in making medical decisions. In medical imaging analysis, the most widely used methods for explaining deep learning-based predictions are saliency maps, which highlight important areas of an image but do not indicate which qualities of those areas matter. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values to explain outcome predictions from complex models built on medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to a variety of settings. We demonstrate this workflow by developing and explaining a model that predicts a genetic mutation from MRI data of glioma patients.
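The pipeline's core idea, attributing a model's prediction to named radiomics features via Shapley values, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration and not the authors' implementation: the radiomics feature names, the synthetic data, and the gradient-boosted classifier are hypothetical stand-ins, and the shap library is assumed in place of whatever Shapley-value implementation the paper's pipeline actually uses.

```python
# Minimal sketch of the explanation step: a classifier trained on tabular
# radiomics features, explained with Shapley values via the shap library.
# The feature names, synthetic data, and outcome below are hypothetical
# illustrations, not data or code from the paper.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical radiomics feature table: one row per patient, one column per
# extracted feature (in practice these would come from a tool such as PyRadiomics).
features = pd.DataFrame(
    rng.normal(size=(200, 4)),
    columns=["shape_Sphericity", "firstorder_Mean",
             "glcm_Contrast", "glrlm_RunEntropy"],
)
# Hypothetical binary outcome (e.g., mutation present vs. absent).
outcome = (features["glcm_Contrast"] + 0.5 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, outcome, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shapley values attribute each prediction to individual radiomics features;
# for a binary gradient-boosted model, shap_values has one row per patient
# and one column per feature, in log-odds units.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which radiomics features drive predictions across the cohort.
shap.summary_plot(shap_values, X_test)
```

In the workflow the paper describes, per-feature attributions like these would feed the clinician-focused dashboard; the summary plot is the simplest stand-in for that global view, and per-patient explanations can be read off individual rows of shap_values.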
