Article

Knowledge graphs as tools for explainable machine learning: A survey

Journal

Artificial Intelligence
Volume 302, Article 103627

Publisher

Elsevier
DOI: 10.1016/j.artint.2021.103627

Keywords

Explainable systems; Knowledge graphs; Explanations; Symbolic AI; Subsymbolic AI; Neuro-symbolic integration; Explainable AI


This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. In recent years, explainable AI has become a very active field of research, addressing the limitations of state-of-the-art machine learning solutions that often provide highly accurate but hardly scrutable and interpretable decisions. Increasing interest has also been shown in integrating Knowledge Representation techniques into Machine Learning applications, largely motivated by the complementary strengths and weaknesses of the two fields, which could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated into Explainable Machine Learning approaches to help them produce more meaningful, insightful and trustworthy explanations. Using a systematic literature review methodology, we designed an analytical framework to explore the current landscape of Explainable Machine Learning. We focus particularly on integration with structured knowledge at large scale, and use our framework to analyse a variety of Machine Learning domains, identifying the main characteristics of such knowledge-based, explainable systems from different perspectives. We then summarise the strengths of these hybrid systems, such as improved understandability, reactivity, and accuracy, as well as their limitations, e.g. in handling noise or extracting knowledge efficiently. We conclude by discussing a list of open challenges left for future research. © 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
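As an illustration of the core idea in the abstract, the sketch below shows, in plain Python, how a knowledge graph stored as machine-readable (subject, predicate, object) triples can ground a model's feature-importance explanation in domain background knowledge. This is a minimal, hypothetical example, not code from the survey; all entities, relations, and importance scores are invented for illustration.

```python
# Hypothetical sketch: a tiny knowledge graph as a set of
# (subject, predicate, object) triples. Real systems would query a
# large-scale graph rather than a hand-built set like this one.
KG = {
    ("haemoglobin", "is_a", "blood protein"),
    ("haemoglobin", "indicator_of", "anaemia"),
    ("anaemia", "is_a", "blood disorder"),
    ("glucose", "indicator_of", "diabetes"),
}

def neighbours(entity):
    """Return all (predicate, object) facts whose subject is `entity`."""
    return [(p, o) for (s, p, o) in KG if s == entity]

def explain(feature_importances, top_k=2):
    """Turn raw feature importances (from some upstream model) into
    sentences grounded in the knowledge graph's background facts."""
    ranked = sorted(feature_importances.items(), key=lambda kv: -kv[1])
    for feature, weight in ranked[:top_k]:
        facts = "; ".join(f"{p.replace('_', ' ')} {o}"
                          for p, o in neighbours(feature))
        print(f"'{feature}' (importance {weight:.2f}) "
              f"-- background: {facts or 'no linked facts'}")

# Invented importance scores standing in for a trained classifier's output.
explain({"haemoglobin": 0.61, "glucose": 0.27, "age": 0.12})
```

The linking pattern, not the toy data, is the point: the systems surveyed typically draw on large-scale, publicly available graphs such as DBpedia or Wikidata to attach this kind of background knowledge to a model's outputs.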

Authors

Ilaria Tiddi, Stefan Schlobach
