4.7 Article

Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions

Journal

ARTIFICIAL INTELLIGENCE IN MEDICINE
Volume 135

Publisher

ELSEVIER
DOI: 10.1016/j.artmed.2022.102471

Keywords

XAI; Black-box; Explanation; Local explanation; Interpretable; Explainable; Fidelity; Reliability; Post-hoc; Model agnostic; Surrogate model

Abstract

Increasingly complex learning methods such as boosting, bagging and deep learning have made ML models more accurate but harder to interpret and explain, culminating in black-box machine learning models. Model developers and users alike are often presented with a trade-off between performance and intelligibility, especially in high-stakes applications like medicine. In the present article we propose a novel methodological approach for generating explanations for the predictions of a generic machine learning model, given a specific instance for which the prediction has been made. The method, named AraucanaXAI, is based on surrogate, locally-fitted classification and regression trees that are used to provide post-hoc explanations of the prediction of a generic machine learning model. Advantages of the proposed XAI approach include superior fidelity to the original model, the ability to deal with non-linear decision boundaries, and native support for both classification and regression problems. We provide a packaged, open-source implementation of the AraucanaXAI method and evaluate its behaviour in a number of different settings that are commonly encountered in medical applications of AI. These include potential disagreement between the model's prediction and the physician's expert opinion, and low reliability of the prediction due to data scarcity.
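The core idea of a local surrogate-tree explanation can be sketched as follows. This is not the authors' AraucanaXAI implementation (which is available as an open-source package); it is a generic illustration, assuming scikit-learn, of the pattern the abstract describes: fit a small decision tree to the black-box model's own predictions in the neighbourhood of the instance to be explained, then use the tree's rules as the explanation and measure fidelity as local agreement with the black box.

```python
# Hypothetical local-surrogate sketch (not the authors' AraucanaXAI code):
# explain one prediction of a black-box model with a small, locally-fitted tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and a black-box model standing in for any opaque classifier.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance to explain and its k nearest neighbours (Euclidean distance).
x0 = X[0]
k = 100
idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
X_local = X[idx]

# Surrogate targets are the black box's outputs, not the true labels:
# the tree approximates the model, so its rules explain the model's behaviour.
y_local = black_box.predict(X_local)

# A shallow tree keeps the explanation human-readable.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_local, y_local)

# Fidelity: fraction of local points where the surrogate agrees with the black box.
fidelity = (surrogate.predict(X_local) == y_local).mean()
print(f"local fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The same pattern extends to regression by swapping in a regression tree and fitting it to the black box's predicted values, which is the "native support for both classification and regression" the abstract refers to.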


