Article

Using Explainable Machine Learning to Explore the Impact of Synoptic Reporting on Prostate Cancer

Journal

Algorithms
Volume 15, Issue 2, Article 49

Publisher

MDPI
DOI: 10.3390/a15020049

Keywords

Cox Proportional Hazards (CPH); explainable AI; eXtreme Gradient Boosting (XGB); interpretability; oncology; prostatectomy; ranked survival; SHAP

In this paper, the authors demonstrate a practical application of explainable machine learning (XML) in oncology, studying the impact of synoptic reporting on the survival of prostate cancer patients. A comparison of two predictive models showed that the gradient-boosted model outperformed the traditional Cox model, and the authors used SHAP values to explain how individual features contributed to the models' output.
Machine learning (ML) models have proven to be an attractive alternative to traditional statistical methods in oncology. However, they are often regarded as black boxes, hindering their adoption for answering real-life clinical questions. In this paper, we show a practical application of explainable machine learning (XML). Specifically, we explored the effect that synoptic reporting (SR; i.e., reports where data elements are presented as discrete data items) in pathology has on the survival of a population of 14,878 Dutch prostate cancer patients. We compared the performance of a Cox Proportional Hazards (CPH) model against that of an eXtreme Gradient Boosting (XGB) model in predicting ranked patient survival. We found that the XGB model (c-index = 0.67) performed significantly better than the CPH model (c-index = 0.58). Moreover, we used Shapley Additive Explanations (SHAP) values to generate a quantitative mathematical representation of how features, including usage of SR, contributed to the models' output. The XGB model in combination with SHAP visualizations revealed interesting interaction effects between SR and the rest of the most important features. These results hint that SR has a moderate positive impact on predicted patient survival. Moreover, adding an explainability layer to predictive ML models can open their black box, making them more accessible and easier to understand by the user. This can make XML-based techniques appealing alternatives to the classical methods used in oncological research and in health care in general.
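The pipeline the abstract describes can be sketched in a few lines of Python. The snippet below is a minimal, illustrative reconstruction, not the authors' code: the cohort is replaced by synthetic data, the feature names (age, psa, gleason, synoptic_reporting) and all hyperparameters are assumptions, and the library choices (lifelines, xgboost, shap) are plausible but unconfirmed. It fits a CPH baseline and an XGBoost model with the Cox objective, compares them by concordance index, and uses SHAP to inspect feature contributions and SR interaction effects.

```python
# Minimal sketch (NOT the authors' code): synthetic stand-in data, assumed
# feature names, and assumed hyperparameters throughout.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 2000

# Hypothetical cohort: feature names are illustrative, not the paper's.
df = pd.DataFrame({
    "age": rng.normal(68, 7, n),
    "psa": rng.lognormal(2.0, 0.8, n),
    "gleason": rng.integers(6, 11, n),
    "synoptic_reporting": rng.integers(0, 2, n),  # 1 = SR used in pathology
})
risk = (0.04 * df["age"] + 0.3 * np.log(df["psa"])
        + 0.5 * df["gleason"] - 0.3 * df["synoptic_reporting"])
df["time"] = rng.exponential(np.exp(-(risk - risk.mean())).to_numpy() * 60)
df["event"] = rng.integers(0, 2, n)  # 1 = death observed, 0 = censored

features = ["age", "psa", "gleason", "synoptic_reporting"]
train, test = df.iloc[: n // 2], df.iloc[n // 2:]

# Baseline: Cox Proportional Hazards (lifelines). A higher partial hazard
# means higher risk, so negate it before computing the c-index.
cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="event")
cph_cindex = concordance_index(
    test["time"], -cph.predict_partial_hazard(test), test["event"])

# XGBoost with the Cox objective: censored rows get a negative time label,
# and predictions are hazard ratios (again negated for the c-index).
label = np.where(train["event"] == 1, train["time"], -train["time"])
dtrain = xgb.DMatrix(train[features], label=label)
dtest = xgb.DMatrix(test[features])
bst = xgb.train({"objective": "survival:cox", "eta": 0.05, "max_depth": 4},
                dtrain, num_boost_round=200)
xgb_cindex = concordance_index(
    test["time"], -bst.predict(dtest), test["event"])
print(f"CPH c-index: {cph_cindex:.2f} | XGB c-index: {xgb_cindex:.2f}")

# SHAP: global feature contributions, then SR's interaction with the
# remaining features via a dependence plot.
explainer = shap.TreeExplainer(bst)
shap_values = explainer.shap_values(test[features])
shap.summary_plot(shap_values, test[features])
shap.dependence_plot("synoptic_reporting", shap_values, test[features])
```

Note the shared convention in this sketch: both models emit risk scores (higher = worse prognosis), while the c-index expects scores that rise with survival time, so both are negated before scoring. The SHAP dependence plot is one way to surface the SR interaction effects the abstract mentions; the authors' actual visualizations may differ.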
