Article

A Neural Relation Extraction Model for Distant Supervision in Counter-Terrorism Scenario

Journal

IEEE ACCESS
Volume 8, Issue -, Pages 225088-225096

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2020.3042672

Keywords

Bit error rate; Data mining; Encoding; Feature extraction; Data models; Big Data; Training; BERT; relation extraction; distant supervision; selective attention mechanism; BERT entity encoding

Funding

  1. National Key Research and Development Program of China [2017YFC0803700]
  2. People's Public Security University of China through 2019 Basic Research Operating Expenses New Teacher Research Startup Fund Project [2019JKF424]
  3. Natural Science Foundation of China [41971367]

Abstract

Natural language processing (NLP) is the best solution for mining the extensive, unstructured, complex, and diverse network big data involved in counter-terrorism. Quickly extracting the relationships between relevant entity pairs in terrorism-related text is the foundational and most critical step of such analysis. Relation extraction lays the groundwork for constructing a knowledge graph (KG) of terrorism and provides technical support for intelligence analysis and prediction. This paper takes distantly supervised relation extraction as its starting point, breaking the limitation of manual data annotation. Combining the Bidirectional Encoder Representations from Transformers (BERT) pre-training model with sentence-level attention over multiple instances, we propose a relation extraction model named BERT-att. Experiments show that our model is more efficient than, and outperforms, the current leading baseline models on every evaluation metric. Our model can be applied to the construction of a counter-terrorism knowledge graph and used in regional security risk assessment, terrorist event prediction, and other scenarios.
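The mechanism the abstract names, sentence-level selective attention over the multiple instances that distant supervision gathers for one entity pair, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function names and toy vectors are assumptions, and in the paper the sentence representations would come from BERT entity encoding rather than fixed arrays.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def selective_attention(sentence_reprs, relation_query):
    """Aggregate a bag of sentence representations for one entity pair.

    Each sentence that mentions the pair is scored against a learned
    relation query vector; noisy sentences (a known problem in distant
    supervision) receive low attention weights, so they contribute
    little to the bag-level representation used for classification.

    sentence_reprs : (n_sentences, dim) array of sentence encodings
    relation_query : (dim,) query vector for the candidate relation
    Returns the (dim,) bag representation and the (n_sentences,) weights.
    """
    scores = sentence_reprs @ relation_query   # one score per sentence
    alpha = softmax(scores)                    # attention weights, sum to 1
    bag_repr = alpha @ sentence_reprs          # weighted average of sentences
    return bag_repr, alpha

# Toy bag: three "sentences" mentioning the same entity pair.
bag = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
query = np.array([1.0, 0.0])
bag_repr, alpha = selective_attention(bag, query)
```

In the full model, the bag representation would be fed to a softmax classifier over relation types, and the query vectors would be trained jointly with the BERT encoder.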
