Journal
KNOWLEDGE-BASED SYSTEMS
Volume 263, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2023.110282
Keywords
Text matching; Word sense disambiguation; Knowledge distillation; Multi-task learning; Lexical knowledge
This study proposes a method to improve text matching by integrating lexical knowledge from external resources to model the senses of potentially ambiguous words. Specifically, a sense-aware mechanism is designed wherein a word sense disambiguation (WSD) model is introduced into text matching and both tasks (WSD and matching) are optimized simultaneously via multi-task learning. The proposed WSD component is a lightweight model distilled from a pre-trained BERT-based model by leveraging lexical knowledge obtained from WordNet. The sense information produced by the WSD model is integrated into matching explicitly and adaptively by fusing the learned sense representations with the word context representations generated by the baseline matching model. The effectiveness of the proposed approach is verified through extensive experiments on three distinct matching-based tasks: natural language inference, paraphrase identification, and answer selection. The results on the respective datasets indicate that the proposed sense knowledge-enhanced matching mechanism outperforms several BERT-based baselines and other recent matching approaches. (c) 2023 Elsevier B.V. All rights reserved.
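The adaptive fusion of sense representations with context representations described in the abstract could, for instance, be realized with a learned sigmoid gate that decides per dimension how much sense information to mix in. The following is a minimal illustrative sketch in NumPy; the gate formulation, function name, and all parameters are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def fuse_sense_and_context(context_vecs, sense_vecs, W_g, b_g):
    """Gated fusion sketch: an elementwise sigmoid gate interpolates
    between the context representation and the sense representation."""
    gate_in = np.concatenate([context_vecs, sense_vecs], axis=-1)
    g = 1.0 / (1.0 + np.exp(-(gate_in @ W_g + b_g)))  # sigmoid gate in (0, 1)
    # Convex combination per dimension: g -> context, (1 - g) -> sense.
    return g * context_vecs + (1.0 - g) * sense_vecs

# Toy example: 5 tokens, 8-dimensional representations.
rng = np.random.default_rng(0)
d = 8
ctx = rng.standard_normal((5, d))    # stand-in for matching-model context vectors
sense = rng.standard_normal((5, d))  # stand-in for WSD sense vectors
W_g = rng.standard_normal((2 * d, d)) * 0.1
b_g = np.zeros(d)
fused = fuse_sense_and_context(ctx, sense, W_g, b_g)
print(fused.shape)  # (5, 8)
```

Because the gate output lies strictly in (0, 1), each fused value is an elementwise interpolation between the two input representations, which keeps the fusion adaptive without discarding either source of information.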