4.7 Article

Towards One-Size-Fits-Many: Multi-Context Attention Network for Diversity of Entity Resolution Tasks

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2021.3060790

Keywords

Entity resolution; deep learning; multi-context attention

Funding

  1. National Natural Science Foundation of China [61702432, 61672455]
  2. Singapore Ministry of Education [T1251RES1913]

This paper examines the task of entity resolution from a broader perspective, extending its input from textual records to other modalities and proposing a unified model to support various applications. The proposed integrated multi-context attention framework fully exploits the semantic contexts of embedding vectors. Extensive experiments verify the effectiveness and generality of the model.
Entity resolution (ER) identifies data instances that refer to the same real-world entity and has received enormous research attention. In this paper, we examine the task of ER from a broader perspective, with its input extended from textual records, which are conventionally studied in the literature, to other modalities such as check-in sequences, GPS trajectories and surveillance video frames, giving rise to new applications. Our goal is to design an effective model that uniformly supports all these ER applications with different input formats. Technically, we fully exploit the semantic contexts of the embedding vectors of the pair of input instances. In particular, we propose an integrated multi-context attention framework that takes into account self-attention, pair-attention and global-attention from three types of context. The idea can be further extended to incorporate attribute attention in order to support structured datasets. We conduct extensive experiments on a diverse class of entity resolution tasks, including tasks on unstructured, structured and dirty textual records, check-in sequences, GPS trajectories and surveillance video frames. The experimental results verify the effectiveness and generality of our model: compared with strong baselines in these applications, it achieves superior or comparable performance.
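
The abstract describes three attention contexts computed over the embedding vectors of an instance pair: self-attention within one instance, pair-attention across the two instances, and global-attention against a shared context. The sketch below is a minimal PyTorch illustration of that idea only; the projection layers, the mean-pooled global context, and the classifier head are assumptions made for illustration and do not reproduce the authors' model (the attribute-attention extension for structured data is omitted).

```python
# Minimal sketch of a multi-context attention block for pairwise entity
# resolution. The three contexts (self, pair, global) follow the abstract's
# high-level description; all layer names and pooling choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def scaled_dot_attention(q, k, v):
    # Standard scaled dot-product attention: (B, Lq, D) x (B, Lk, D) -> (B, Lq, D).
    scores = torch.bmm(q, k.transpose(1, 2)) / (q.size(-1) ** 0.5)
    return torch.bmm(F.softmax(scores, dim=-1), v)


class MultiContextAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.self_proj = nn.Linear(dim, dim)     # hypothetical query projections
        self.pair_proj = nn.Linear(dim, dim)
        self.global_proj = nn.Linear(dim, dim)
        self.classifier = nn.Sequential(
            nn.Linear(6 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def encode(self, x, y, g):
        # Self-attention: contextualize instance x with its own embeddings.
        h_self = scaled_dot_attention(self.self_proj(x), x, x)
        # Pair-attention: align x against the other instance y.
        h_pair = scaled_dot_attention(self.pair_proj(x), y, y)
        # Global-attention: read x through a pooled global context vector g.
        h_glob = scaled_dot_attention(self.global_proj(g), x, x)
        # Mean-pool each view into a fixed-size summary and concatenate.
        return torch.cat([h_self.mean(1), h_pair.mean(1), h_glob.mean(1)], dim=-1)

    def forward(self, x, y):
        # x, y: (B, Lx, D) and (B, Ly, D) embedding sequences of the two instances.
        g = torch.cat([x, y], dim=1).mean(dim=1, keepdim=True)  # (B, 1, D) global context
        rep = torch.cat([self.encode(x, y, g), self.encode(y, x, g)], dim=-1)
        return torch.sigmoid(self.classifier(rep)).squeeze(-1)  # match probability


if __name__ == "__main__":
    model = MultiContextAttention(dim=64)
    a = torch.randn(2, 10, 64)   # e.g. token embeddings of record A
    b = torch.randn(2, 12, 64)   # e.g. token embeddings of record B
    print(model(a, b).shape)     # torch.Size([2])
```

Because the pair is encoded symmetrically (x against y and y against x), the same block can be reused for the different input modalities once each instance is mapped to a sequence of embedding vectors.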
