Article

A Two-channel model for relation extraction using multiple trained word embeddings

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 255, Issue: -, Pages: -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2022.109701

Keywords

Relation extraction; Two-channel model; Trained word embedding

Funding

  1. National Key Research and Development Program of China [2018AAA0101601]


This paper proposes a Two-channel model for relation extraction that uses multiple trained word embeddings to alleviate the polysemy problem, together with a two-channel fusion method to improve performance. Experiments show that the Two-channel model outperforms existing models and that the fusion method achieves better results than concatenation or addition.
As an essential task in the field of knowledge graphs, relation extraction (RE) has received extensive attention from researchers. Since existing RE methods adopt only one trained word embedding to obtain the sentence representation, the polysemy problem cannot be well solved. To alleviate polysemy in RE, this paper proposes a Two-channel model that adopts multiple trained word embeddings, in which one channel is a bidirectional long short-term memory network with an attention mechanism (Bi-LSTM-ATT) and the other channel is a convolutional neural network (CNN). Furthermore, a two-channel fusion method is proposed on top of this model to deal with the polysemy problem in RE. As a result, the Two-channel model achieves F1-scores of 85.42% and 62.2% on the SemEval-2010 Task 8 and KBP37 datasets, respectively. The experimental results show that the Two-channel model performs better than most existing models without using external features generated by natural language processing (NLP) tools. In addition, the two-channel fusion method obtains better performance than either concatenation or addition of the two channels. (c) 2022 Elsevier B.V. All rights reserved.
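The architecture described in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch reconstruction, not the authors' released code: the class name `TwoChannelRE`, the hyperparameters, and the gated fusion are assumptions (the paper's exact fusion method is not detailed here; the abstract only states it outperforms concatenation and addition). Channel 1 runs a Bi-LSTM with additive attention over one trained embedding; channel 2 runs a 1-D CNN over a second, independently trained embedding.

```python
# Hypothetical sketch of the Two-channel RE model; names and fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoChannelRE(nn.Module):
    def __init__(self, emb1, emb2, hidden=100, n_filters=100, n_rel=19):
        super().__init__()
        # Two frozen, independently trained word embeddings (e.g. word2vec and GloVe).
        self.emb1 = nn.Embedding.from_pretrained(emb1, freeze=True)
        self.emb2 = nn.Embedding.from_pretrained(emb2, freeze=True)
        # Channel 1: Bi-LSTM over embedding 1, followed by additive attention.
        self.lstm = nn.LSTM(emb1.size(1), hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)
        # Channel 2: 1-D CNN over embedding 2 with max-over-time pooling.
        self.conv = nn.Conv1d(emb2.size(1), n_filters, kernel_size=3, padding=1)
        # Gated fusion (an assumption standing in for the paper's fusion method).
        self.gate = nn.Linear(2 * hidden + n_filters, 2 * hidden)
        self.proj = nn.Linear(n_filters, 2 * hidden)
        self.cls = nn.Linear(2 * hidden, n_rel)

    def forward(self, tokens):
        # Channel 1: attention-weighted sum of Bi-LSTM hidden states.
        h, _ = self.lstm(self.emb1(tokens))              # (B, T, 2H)
        a = torch.softmax(self.att(h).squeeze(-1), -1)   # (B, T)
        c1 = (a.unsqueeze(-1) * h).sum(1)                # (B, 2H)
        # Channel 2: max-pooled convolutional features.
        c2 = F.relu(self.conv(self.emb2(tokens).transpose(1, 2))).max(-1).values
        # Fuse the two channels with a learned gate, then classify the relation.
        g = torch.sigmoid(self.gate(torch.cat([c1, c2], -1)))
        fused = g * c1 + (1 - g) * self.proj(c2)
        return self.cls(fused)                           # (B, n_rel)
```

The default `n_rel=19` matches the SemEval-2010 Task 8 label set (9 directed relations plus "Other"); for KBP37 it would be set to that dataset's label count.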

