4.4 Article

Interpretability in healthcare: A comparative study of local machine learning interpretability techniques

Journal

COMPUTATIONAL INTELLIGENCE
Volume 37, Issue 4, Pages 1633-1650

Publisher

WILEY
DOI: 10.1111/coin.12410

Keywords

big data; data science; interpretability; machine learning

Funding

European Regional Development Fund [MOBJD341, MOBTT75]


Abstract

In the healthcare domain, trust in and explainability of complex machine learning models are essential: the purpose of interpretability techniques is to provide insight into the prediction process and to explain the generated results.
Although complex machine learning models (e.g., random forests, neural networks) commonly outperform traditional, simple, interpretable models (e.g., linear regression, decision trees), clinicians in the healthcare domain find these complex models hard to understand and trust because their predictions offer little intuition or explanation. With the new General Data Protection Regulation (GDPR), the plausibility and verifiability of predictions made by machine learning models have become essential. Hence, interpretability techniques for machine learning models are an active area of research. In general, the main aim of these techniques is to shed light on the prediction process of a machine learning model and to explain how its results were generated. A major problem in this context is that both the quality of interpretability techniques and trust in machine learning model predictions are challenging to measure. In this article, we propose four fundamental quantitative measures for assessing the quality of interpretability techniques: similarity, bias detection, execution time, and trust. We present a comprehensive experimental evaluation of six recent and popular local model-agnostic interpretability techniques, namely, LIME, SHAP, Anchors, LORE, ILIME, and MAPLE, on different types of real-world healthcare data. Building on previous work, our evaluation compares the techniques along several dimensions, including identity, stability, separability, similarity, execution time, bias detection, and trust.

The results of our experiments show that MAPLE achieves the highest performance on the identity metric across all data sets included in this study, while LIME achieves the lowest. LIME achieves the highest performance on the separability metric across all data sets. SHAP has the smallest average time to output an explanation across all data sets included in this study. For bias detection, SHAP and MAPLE best enable participants to detect biased models. For the trust metric, Anchors achieves the highest performance on all data sets included in this work.
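
To make the evaluated metrics concrete, below is a minimal sketch of the identity metric (identical instances should receive identical explanations) applied to LIME, one of the six techniques compared. This is an illustration, not the authors' implementation: it assumes scikit-learn's breast-cancer data as a stand-in for the paper's healthcare data sets, a random forest as the black-box model, and an ad hoc comparison of explanation weights.

    # Hypothetical sketch (not the paper's code): the "identity" check for LIME.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    def explanation_weights(instance, num_features=5):
        # Reduce a LIME explanation to a {feature_index: weight} mapping so two
        # explanations of the same instance can be compared directly.
        exp = explainer.explain_instance(
            instance, model.predict_proba, num_features=num_features
        )
        return dict(exp.as_map()[1])

    # Identity check: explain the exact same instance twice and compare.
    x = X[0]
    first, second = explanation_weights(x), explanation_weights(x)
    same_features = set(first) == set(second)
    same_weights = same_features and all(
        np.isclose(first[i], second[i]) for i in first
    )
    print("identical feature sets:", same_features)
    print("identical weights:", same_weights)

An analogous check applies to any of the other five techniques. A deterministic explainer passes it by construction, whereas LIME's random perturbation sampling can make repeated explanations of the same instance differ, consistent with the paper's finding that LIME scores lowest on the identity metric.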
