Article

A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features

Journal

SENSORS
Volume 22, Issue 14, Pages: -

Publisher

MDPI
DOI: 10.3390/s22145205

Keywords

explainable machine learning; medical imaging; information visualization; radiomics

Funding

  1. Grohne-Stapp Endowment from the University of Colorado Cancer Center

Abstract

Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as black boxes. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. In the field of medical imaging analysis specifically, the most commonly used methods for explaining deep learning-based model predictions are saliency maps, which highlight important areas of an image; however, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values as tools to explain outcome predictions from complex prediction models built from medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to a variety of settings. We demonstrate the use of this workflow for developing and explaining a prediction model that uses MRI data from glioma patients to predict a genetic mutation.
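
The abstract describes explaining a model built on radiomics features with Shapley values. Below is a minimal sketch of that general idea, not the authors' implementation: it assumes tabular radiomics features (which in practice would come from segmented MRI regions, e.g. via pyradiomics), uses the open-source `shap` package with a random-forest classifier, and the feature names, data, and "mutation" label are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tabular radiomics features; real values would be extracted
# from segmented MRI regions (e.g. with pyradiomics).
rng = np.random.default_rng(0)
feature_names = ["shape_Sphericity", "firstorder_Entropy",
                 "glcm_Contrast", "glrlm_RunLengthNonUniformity"]
X = pd.DataFrame(rng.normal(size=(200, len(feature_names))),
                 columns=feature_names)
# Toy binary "mutation" label for illustration only.
y = (X["firstorder_Entropy"] + 0.5 * X["glcm_Contrast"] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a prediction model on the radiomics features.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explain individual predictions with Shapley values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, the result is a list (one array per class)
# or a 3-D array; keep the values for the positive class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[:, :, 1]

# Rank features by mean absolute Shapley value (global importance).
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=feature_names)
print(importance.sort_values(ascending=False))
```

Per-patient Shapley values of this kind can then be visualized, for example with `shap.summary_plot(shap_values, X_test)`, which is one common way to surface such explanations in a clinician-facing dashboard.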
