Journal
NEURAL NETWORKS
Volume 135, Pages 1-12
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2020.11.012
Keywords
Knowledge graph reasoning; Reinforcement learning; Graph self-attention; GRU
Funding
- European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant [721321]
The approach improves knowledge graph reasoning by introducing a distance-aware reward and a graph self-attention mechanism that enhance the model's understanding of entity information and relations. Unlike previous methods, this approach eliminates the need for pre-training or fine-tuning, which significantly reduces the complexity of the problem.
Knowledge graph reasoning aims to find reasoning paths for relations over incomplete knowledge graphs (KGs). Prior works may not take into account that the reward for each position (vertex in the graph) may be different. We propose a distance-aware reward in the reinforcement learning framework that assigns different rewards to different positions. We observe that KG embeddings are learned from independent triples and therefore cannot fully cover the information described in the local neighborhood. To this end, we integrate a graph self-attention (GSA) mechanism to capture more comprehensive entity information from the neighboring entities and relations. To let the model remember the path, we combine the GSA mechanism with a GRU that maintains a memory of the relations along the path. Our approach can train the agent in one pass, eliminating the pre-training and fine-tuning processes and significantly reducing the problem complexity. Experimental results demonstrate the effectiveness of our method. We find that our model can mine more balanced paths for each relation. (c) 2020 Elsevier Ltd. All rights reserved.
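The distance-aware reward idea from the abstract can be illustrated with a minimal sketch: instead of a flat hit/miss reward at the end of a rollout, the terminal reward decays with the stopping vertex's graph distance to the target entity. This is one plausible reading of the abstract, not the paper's exact formulation; all function names, the decay constant, and the toy graph below are hypothetical.

```python
# Illustrative sketch (assumption, not the paper's exact reward): vertices
# closer to the target entity receive larger terminal rewards than a flat
# hit/miss signal would give them.
from collections import deque

def bfs_distances(graph, target):
    """Shortest hop count from every reachable vertex to `target` (BFS)."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        v = queue.popleft()
        for u in graph.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def distance_aware_reward(stop_vertex, target, dist, decay=0.8):
    """Reward 1.0 on the target; elsewhere it decays with hop distance."""
    if stop_vertex == target:
        return 1.0
    d = dist.get(stop_vertex)
    return 0.0 if d is None else decay ** d

# Toy undirected KG as adjacency lists: a - b - c - t
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "t"], "t": ["c"]}
dist = bfs_distances(graph, "t")
print(distance_aware_reward("t", "t", dist))  # 1.0 (reached the target)
print(distance_aware_reward("c", "t", dist))  # 0.8 (one hop away)
```

Under this reading, an agent that stops near the target still receives a partial learning signal, which is what differentiates the reward across positions in the graph.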