Article

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2022.3187455

Keywords

Graph neural networks; interpretability; explanation

Abstract

Recently, graph neural networks (GNNs) have been shown to represent graph-structured data effectively, owing to their good performance and generalization ability. However, explaining GNN models is challenging because of the complex nonlinear transformations made over the iterations. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. Experiments on two real-world datasets show that the explanations produced by GraphLIME are more descriptive and informative than those of existing explanation methods.
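The core building block named in the abstract, HSIC Lasso, selects features by regressing a centered Gram matrix of the target onto centered Gram matrices of the individual features under a nonnegative L1 penalty. The following is a minimal self-contained sketch of that idea (not the authors' implementation): Gaussian kernels with a heuristic bandwidth, Frobenius normalization of each Gram matrix, and a simple coordinate-descent solver for the nonnegative Lasso. The function names, bandwidth choice, and solver are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(x, sigma):
    # (n,) vector -> (n, n) Gaussian Gram matrix
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def center(K):
    # double-center: H K H with H = I - (1/n) 11^T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def hsic_lasso(X, y, lam=0.01, n_iter=200):
    """Return nonnegative importance weights, one per feature column of X.

    Solves min_{beta >= 0} 0.5 * ||vec(Lbar) - sum_k beta_k vec(Kbar_k)||^2
                           + lam * sum_k beta_k
    where Lbar / Kbar_k are centered, normalized Gram matrices of y and
    feature k (an illustrative formulation of HSIC Lasso).
    """
    n, d = X.shape
    L = center(gaussian_gram(y, np.std(y) + 1e-12))
    b = (L / (np.linalg.norm(L) + 1e-12)).ravel()
    cols = []
    for j in range(d):
        K = center(gaussian_gram(X[:, j], np.std(X[:, j]) + 1e-12))
        cols.append((K / (np.linalg.norm(K) + 1e-12)).ravel())
    A = np.stack(cols, axis=1)            # (n*n, d), unit-norm columns
    beta = np.zeros(d)
    for _ in range(n_iter):               # coordinate descent, soft-threshold at lam
        for j in range(d):
            r = b - A @ beta + A[:, j] * beta[j]   # residual without feature j
            beta[j] = max(0.0, A[:, j] @ r - lam)  # ||A[:, j]|| = 1
    return beta

# Usage: features with nonlinear dependence on y get large weights,
# independent features are driven to exactly zero by the L1 penalty.
```

In a GraphLIME-style use, X would hold the node features of the explained node's sampled subgraph and y the GNN's predictions on those nodes, so the nonzero weights identify the features that locally explain the prediction.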
