Article

Explanation of machine learning models using Shapley additive explanation and application for real data in hospital

Journal

Computer Methods and Programs in Biomedicine
Publisher

Elsevier Ireland Ltd
DOI: 10.1016/j.cmpb.2021.106584

Keywords

Shapley additive explanation; Machine learning; Interpretability; Feature importance; Feature packing

Funding

  1. JSPS KAKENHI [JP20K11938]


This study used the SHAP method to interpret a gradient-boosting decision tree model, proposing new techniques for better interpretability. Experimental results on hospital cerebral infarction data showed consistency between SHAP and existing methods, highlighting the importance of the A/G ratio in predicting cerebral infarction.
Background and Objective: When using machine learning techniques in decision-making processes, the interpretability of the models is important. In the present paper, we adopted the Shapley additive explanation (SHAP), which is based on fair profit allocation among many stakeholders depending on their contribution, for interpreting a gradient-boosting decision tree model using hospital data.

Methods: For better interpretability, we propose two novel techniques: (1) a new metric of feature importance using SHAP and (2) a technique termed feature packing, which packs multiple similar features into one grouped feature to allow an easier understanding of the model without reconstructing it. We then compared the explanation results between the SHAP framework and existing methods using cerebral infarction data from our hospital.

Results: The interpretation by SHAP was mostly consistent with that by the existing methods. We showed how the A/G ratio works as an important prognostic factor for cerebral infarction using the proposed techniques.

Conclusion: Our techniques are useful for interpreting machine learning models and can uncover the underlying relationships between features and outcome. (C) 2021 Elsevier B.V. All rights reserved.
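The feature-packing idea rests on the additivity of SHAP values: summing the SHAP columns of several related features yields one grouped attribution without retraining or reconstructing the model. Below is a minimal NumPy sketch of that principle. The feature names, the synthetic SHAP matrix, and the helper functions are illustrative assumptions, not the paper's actual implementation or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SHAP value matrix: rows = samples, columns = features.
# Names and values are synthetic, chosen only to illustrate the idea.
feature_names = ["age", "albumin", "globulin", "bp"]
shap_values = rng.normal(size=(100, 4))

def importance(shap_vals):
    """Mean absolute SHAP value per feature (a common importance metric)."""
    return np.abs(shap_vals).mean(axis=0)

def pack_features(shap_vals, names, group, packed_name):
    """Sum the SHAP columns of the grouped features into one column.

    Because SHAP values are additive, each sample's total attribution
    (and hence the explained prediction) is unchanged by packing.
    """
    idx = [names.index(n) for n in group]
    keep = [i for i in range(len(names)) if i not in idx]
    packed_col = shap_vals[:, idx].sum(axis=1, keepdims=True)
    new_vals = np.hstack([shap_vals[:, keep], packed_col])
    new_names = [names[i] for i in keep] + [packed_name]
    return new_vals, new_names

# Pack the two albumin/globulin-related columns into one grouped feature.
packed, packed_names = pack_features(
    shap_values, feature_names, ["albumin", "globulin"], "A/G-related"
)

# Additivity check: per-sample explanation totals are unchanged.
assert np.allclose(shap_values.sum(axis=1), packed.sum(axis=1))

print(dict(zip(packed_names, np.round(importance(packed), 3))))
```

The packed matrix can be fed back into standard SHAP summary plots, which is what makes the technique cheap: only the explanation is regrouped, never the model.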
