Article

Multi-modal knowledge graphs representation learning via multi-headed self-attention

Journal

INFORMATION FUSION
Volume 88, Pages 78-85

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2022.07.008

Keywords

Multi-modal knowledge graphs; Representation learning; Multi-modal information fusion

Funding

  1. National Natural Science Foundation of China [U1603262, 61562082]


This study proposes a multi-modal knowledge graph representation learning method based on multi-head self-attention, which improves the effectiveness of link prediction by enriching entities with multi-modal information.
Traditional knowledge graph (KG) representation learning focuses on the link information between entities, and its effectiveness is influenced by the complexity of the KG. In a multi-modal knowledge graph (MKG), the introduction of considerable information from other modalities (such as images and texts) further increases this complexity, which degrades the effectiveness of representation learning. To solve this problem, this study proposed the multi-modal knowledge graphs representation learning via multi-head self-attention (MKGRL-MS) model, which improved the effectiveness of link prediction by adding rich multi-modal information to each entity. We first generated a single-modal feature vector for each entity. Then, we used multi-head self-attention to obtain the attention weights of an entity's different modal features during semantic synthesis, and in this manner learned the entity's multi-modal feature representation. The new knowledge representation is the sum of the traditional knowledge representation and the entity's multi-modal feature representation. Finally, we trained our model on top of two existing models and two different datasets and verified its versatility and effectiveness on the link prediction task.
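To make the fusion step concrete, the following is a minimal sketch in PyTorch of how multi-head self-attention could combine per-entity modal feature vectors and add the result to a traditional KG embedding. The module name, dimensions, mean pooling, and all hyperparameters are illustrative assumptions; the paper's exact architecture and training objective are not reproduced here.

```python
# Minimal sketch of the MKGRL-MS fusion idea, assuming PyTorch.
# All names and sizes are hypothetical, not the authors' implementation.
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Fuse per-entity single-modal features (e.g. structure, image, text)
    with multi-head self-attention, then add the fused vector to the
    traditional KG embedding to form the new entity representation."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, modal_feats: torch.Tensor, kg_emb: torch.Tensor) -> torch.Tensor:
        # modal_feats: (batch, num_modalities, dim) -- one vector per modality
        # kg_emb:      (batch, dim)                 -- traditional KG embedding
        attended, _ = self.attn(modal_feats, modal_feats, modal_feats)
        multimodal = attended.mean(dim=1)   # pool attended modal features
        return kg_emb + multimodal          # sum, as described in the abstract

# Toy usage: 2 entities, 3 modalities (structure, image, text), dim 128
fusion = MultiModalFusion()
feats = torch.randn(2, 3, 128)
kg = torch.randn(2, 128)
new_repr = fusion(feats, kg)
print(new_repr.shape)  # torch.Size([2, 128])
```

The final addition mirrors the abstract's description of the new knowledge representation as the sum of the traditional representation and the entity's multi-modal feature representation.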
