Article

Coalitional Strategies for Efficient Individual Prediction Explanation

Journal

INFORMATION SYSTEMS FRONTIERS
Volume 24, Issue 1, Pages 49-75

Publisher

SPRINGER
DOI: 10.1007/s10796-021-10141-9

Keywords

Data analysis; Machine learning; Interpretability; Explainable Artificial Intelligence (XAI); Prediction explanation


Abstract

As Machine Learning (ML) is now widely applied in many domains, in both research and industry, understanding what is happening inside the black box is a growing demand, especially from non-experts of these models. Several approaches have thus been developed to provide clear insights into a model's prediction for a particular observation, but at the cost of long computation times or restrictive hypotheses that do not fully account for interactions between attributes. This paper provides methods based on the detection of relevant groups of attributes (named coalitions) influencing a prediction and compares them with the literature. Our results show that these coalitional methods are more efficient than existing ones such as SHapley Additive exPlanations (SHAP). Computation time is shortened while preserving an acceptable accuracy of individual prediction explanations. This enables wider practical use of explanation methods to increase trust between developed ML models, end-users, and anyone impacted by a decision in which these models played a role.
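The computational cost the abstract alludes to comes from exact Shapley attribution, which averages an attribute's marginal contribution over every possible coalition of the other attributes. The sketch below is a generic illustration of that exponential enumeration, not the paper's coalitional method; the `predict` and `baseline` names are assumptions, and absent attributes are simply replaced by baseline values.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for one observation by enumerating
    all 2^(n-1) coalitions per attribute. This exponential cost is what
    motivates approximations and coalition-detection shortcuts."""
    n = len(x)
    phi = [0.0] * n

    def value(coalition):
        # Attributes outside the coalition revert to their baseline value.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: Shapley values recover each term's contribution exactly.
model = lambda z: 3 * z[0] + 2 * z[1] + z[2]
print(shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0]))  # [3.0, 2.0, 1.0]
```

For a linear model the attributions equal each weight times the attribute's deviation from baseline, and by the efficiency property they sum to `predict(x) - predict(baseline)`; for realistic attribute counts this enumeration is infeasible, which is where coalitional grouping strategies aim to cut the cost.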

