Article

MRE: A Military Relation Extraction Model Based on BiGRU and Multi-Head Attention

Journal

SYMMETRY-BASEL
Volume 13, Issue 9, Pages -

Publisher

MDPI
DOI: 10.3390/sym13091742

Keywords

military relation extraction; bi-directional encoder representations from transformers (BERT); BiGRU; multi-head attention

Funding

  1. National Social Science Foundation of China [2019-SKJJ-C-083]


A significant amount of operational information exists in textual form, so extracting it from unstructured military text is crucial. Traditional methods suffer from limitations such as inadequate manual features and inaccurate Chinese word segmentation in the military domain. The proposed approach combines BiGRU and multi-head attention (MHATT) to improve military relation extraction, achieving an F1-score improvement of about 4% over traditional models.
A great deal of operational information exists in the form of text, so extracting operational information from unstructured military text is of great significance for assisting command decision making and operations. Military relation extraction, one of the main tasks of military information extraction, aims to identify the relation between two named entities in unstructured military text. Traditional methods of military relation extraction, however, struggle with problems such as inadequate manual features and inaccurate Chinese word segmentation in the military domain, and fail to make full use of the symmetrical entity relations in military texts. We present a Chinese military relation extraction method, built on a pre-trained language model, that combines a bi-directional gated recurrent unit (BiGRU) with a multi-head attention mechanism (MHATT). More specifically, the embedding layer combines word embeddings with position embeddings on top of the pre-trained language model; the forward and backward output vectors of the BiGRU are symmetrically spliced to learn the semantic features of the context; and a multi-head attention mechanism is fused in to improve the model's ability to express semantic information. In extensive experiments on a military text corpus that we built, our method outperforms the traditional non-attention model, the attention model, and the improved attention model, improving the comprehensive evaluation metric F1-score by about 4%.
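
For concreteness, the pipeline the abstract describes (word embeddings combined with position embeddings, a BiGRU whose forward and backward outputs are spliced, multi-head attention fused over the result, and a relation classifier on top) can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' implementation: the class name BiGRUMhattRE, all layer sizes, the mean-pooling step, and the stand-in trainable embedding layer (in place of the pre-trained language model) are assumptions.

    # Minimal sketch of the described pipeline: embeddings -> BiGRU ->
    # multi-head attention -> relation classifier. Names and sizes are
    # illustrative assumptions, not the paper's exact configuration.
    import torch
    import torch.nn as nn

    class BiGRUMhattRE(nn.Module):
        def __init__(self, vocab_size=21128, embed_dim=768, max_len=128,
                     hidden=256, num_heads=8, num_relations=10):
            super().__init__()
            # Embedding layer: word embeddings plus learned position
            # embeddings, standing in for the pre-trained model's embeddings.
            self.word_emb = nn.Embedding(vocab_size, embed_dim)
            self.pos_emb = nn.Embedding(max_len, embed_dim)
            # Bi-directional GRU; forward and backward outputs are
            # concatenated ("symmetrically spliced") into 2*hidden features.
            self.bigru = nn.GRU(embed_dim, hidden, batch_first=True,
                                bidirectional=True)
            # Multi-head self-attention fused over the BiGRU outputs.
            self.mhatt = nn.MultiheadAttention(2 * hidden, num_heads,
                                               batch_first=True)
            self.classifier = nn.Linear(2 * hidden, num_relations)

        def forward(self, token_ids):
            _, seq_len = token_ids.shape
            positions = torch.arange(seq_len, device=token_ids.device)
            x = self.word_emb(token_ids) + self.pos_emb(positions)
            h, _ = self.bigru(x)              # (batch, seq, 2*hidden)
            attn_out, _ = self.mhatt(h, h, h) # self-attention fusion
            sentence = attn_out.mean(dim=1)   # pool to a sentence vector
            return self.classifier(sentence)  # relation logits

    # Usage: classify the relation expressed in a batch of token sequences.
    logits = BiGRUMhattRE()(torch.randint(0, 21128, (2, 32)))
    print(logits.shape)  # torch.Size([2, 10])

Mean pooling is one simple way to reduce the attended sequence to a single relation vector; entity-marker pooling or a [CLS]-style token would be plausible alternatives under the same architecture.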

