Proceedings Paper

DLIME-Graphs: A DLIME Extension Based on Triple Embedding for Graphs

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-21422-6_6

Keywords

Triple embeddings; Knowledge graphs; Interpretable method; DLIME; SBERT; TSDAE

Funding

  1. VLIR-UOS Network University Cooperation Programme-Cuba

This work proposes DLIME-Graphs, an extension of DLIME for explaining machine learning models on graphs. By reducing triple embeddings with UMAP and clustering them with HDBSCAN, DLIME-Graphs provides explanations for 100% of the triples in the test dataset, improving model transparency and interpretability.
In recent years, many approaches have been proposed for the Knowledge Graph Completion task. However, like most Machine Learning models, most Knowledge Graph Completion models are opaque and lack interpretability. To achieve transparency, several interpretable and explainable models have been proposed. Deterministic Local Interpretable Model-Agnostic Explanations (DLIME) was introduced to address the instability of Local Interpretable Model-Agnostic Explanations (LIME), one of the most popular surrogate models. However, applying DLIME to explain Machine Learning models on graphs is problematic because its published experiments cover only tabular data. This work therefore proposes an interpretable method for graphs, an extension of DLIME named DLIME-Graphs. As a triple representation, DLIME-Graphs uses triple embeddings computed by SBERT, which are in turn reduced with the UMAP technique. Instead of the Hierarchical Clustering used by DLIME, DLIME-Graphs obtains clusters with HDBSCAN. To explain a test triple, DLIME-Graphs trains two interpretable models, logistic regression and a decision tree, and retrieves the most similar triples with a k-NN algorithm. A case study showed that DLIME-Graphs provides explanations for 100% of the triples in the test dataset through these models, offering transparency and interpretability.

