Journal
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
Volume 214, Issue -, Pages -
Publisher
ELSEVIER IRELAND LTD
DOI: 10.1016/j.cmpb.2021.106584
Keywords
Shapley additive explanation; Machine learning; Interpretability; Feature importance; Feature packing
Funding
- JSPS KAKENHI [JP20K11938]
This study applied the SHAP method to interpret a gradient-boosting decision tree model and proposed new techniques for better interpretability. Experimental results on hospital cerebral infarction data showed consistency between SHAP and existing methods and highlighted the importance of the A/G ratio in predicting cerebral infarction.
Background and Objective: When machine learning techniques are used in decision-making processes, the interpretability of the models is important. In the present paper, we adopt the Shapley additive explanation (SHAP), which is based on the fair allocation of profit among stakeholders according to their contributions, to interpret a gradient-boosting decision tree model trained on hospital data.
Methods: For better interpretability, we propose two novel techniques: (1) a new metric of feature importance based on SHAP and (2) a technique termed feature packing, which packs multiple similar features into one grouped feature, allowing the model to be understood more easily without reconstructing it. We then compared the explanations produced by the SHAP framework and by existing methods on cerebral infarction data from our hospital.
Results: The interpretation by SHAP was largely consistent with that by the existing methods. Using the proposed techniques, we show how the A/G ratio acts as an important prognostic factor for cerebral infarction.
Conclusion: Our techniques are useful for interpreting machine learning models and can uncover the underlying relationships between features and outcome.
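The following is a minimal sketch, not the authors' code, of the two ideas the abstract describes, assuming scikit-learn and the shap library with synthetic stand-in data. It uses mean absolute SHAP value as one common SHAP-based importance metric (the paper proposes its own variant), and it exploits the additivity of SHAP values to merge a hypothetical group of similar features without retraining; the paper's exact feature-packing procedure may differ.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the hospital data: 6 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# A SHAP-based feature-importance metric: mean absolute SHAP value
# per feature (one common choice, not necessarily the paper's metric).
importance = np.abs(shap_values).mean(axis=0)
print("per-feature importance:", importance)

# "Feature packing": because SHAP values are additive, similar features
# can be merged into one grouped feature by summing their SHAP values,
# with no need to reconstruct the model. The group below is hypothetical.
group = [0, 1]
packed = shap_values[:, group].sum(axis=1)
print("packed-feature importance:", np.abs(packed).mean())

Because the per-sample SHAP values of a packed group still sum into the model's prediction, the grouped feature can be plotted and ranked exactly like an ordinary feature, which is what makes this kind of grouping possible without retraining.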