3.8 Proceedings Paper

Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3442381.3449988

Keywords

Prior Knowledge; Semantic Textual Similarity; Deep Neural Networks; BERT

Funding

  1. National Natural Science Foundation of China [61976102, U19A2065]
  2. Fundamental Research Funds for the Central Universities

Abstract

This study proposes a novel approach to enhance BERT's performance on semantic textual matching tasks by injecting prior knowledge directly into BERT's multi-head attention mechanism. Experimental results show that the knowledge-enhanced BERT consistently improves performance, especially when training data is scarce.
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT already knows about this task, we obtain a better understanding of which task-specific knowledge BERT needs most and where it is needed most. This analysis further motivates us to take a different approach from most existing works. Instead of using prior knowledge to create a new training task for fine-tuning BERT, we inject the knowledge directly into BERT's multi-head attention mechanism. This leads to a simple yet effective approach with a fast training stage, since it spares the model from training on additional data or tasks beyond the main task. Extensive experiments demonstrate that the proposed knowledge-enhanced BERT consistently improves semantic textual matching performance over the original BERT model, and the performance benefit is most salient when training data is scarce.
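The paper itself specifies exactly how the prior is injected; purely as a rough illustration, the PyTorch sketch below assumes the prior knowledge takes the form of a token-pair similarity matrix added to the scaled dot-product attention logits before the softmax. The function name `knowledge_guided_attention`, the weighting factor `alpha`, and the tensor shape conventions are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def knowledge_guided_attention(Q, K, V, prior, alpha=1.0, mask=None):
    """Scaled dot-product attention with an additive prior-knowledge bias.

    Q, K, V : (batch, heads, seq_len, d_k) query / key / value tensors.
    prior   : (batch, 1, seq_len, seq_len) token-pair prior scores
              (e.g. word-similarity signals), broadcast across heads.
    alpha   : illustrative weight of the prior relative to learned scores.
    """
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # learned attention logits
    scores = scores + alpha * prior                  # inject the prior here
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)              # knowledge-guided attention
    return weights @ V


# Toy usage: batch of 2, 12 heads, 8 tokens, 64-dim per head.
Q = K = V = torch.randn(2, 12, 8, 64)
prior = torch.rand(2, 1, 8, 8)          # stand-in for a similarity prior
out = knowledge_guided_attention(Q, K, V, prior)
print(out.shape)                         # torch.Size([2, 12, 8, 64])
```

Because the bias is added inside the existing attention computation, no extra training data or auxiliary task is needed, which is consistent with the fast training stage described in the abstract.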
