Article

A Multi-Task BERT-BiLSTM-AM-CRF Strategy for Chinese Named Entity Recognition

Journal

NEURAL PROCESSING LETTERS
Volume 55, Issue 2, Pages 1209-1229

Publisher

SPRINGER
DOI: 10.1007/s11063-022-10933-3

Keywords

Named entity recognition; Multi-task learning; BERT; BiLSTM; AM


Named entity recognition is a technology that aims to identify and mark entities with specific meanings in text. This paper proposes a multi-task intelligent processing model that utilizes machine learning and deep learning techniques, as well as context semantic information, to improve Chinese named entity recognition. The model achieves significant improvements in F1 score compared to previous single task models, demonstrating the effectiveness of multi-task learning.
Named entity recognition aims to identify and mark entities with specific meanings in text. It is a key technology for further extracting entity relationships and mining other latent information in natural language processing. At present, methods based on machine learning and deep learning are widely used in named entity recognition research, but most learning models rely on word- and character-level feature extraction. The word preprocessing in such models often ignores the contextual semantic information of the target word and cannot model polysemy. In addition, the loss of semantic information and limited training data greatly restrict improvements in model performance and generalization ability. To solve these problems and improve the efficiency of named entity recognition for Chinese text, this paper constructs a multi-task BERT-BiLSTM-AM-CRF intelligent processing model. BERT extracts dynamic word vectors that incorporate context information, and the results are further trained by a BiLSTM module before passing through an attention mechanism network, which allows the model to learn jointly on two Chinese datasets. Finally, a CRF layer decodes the observation annotation sequence to produce the final result. Compared with many previous single-task models, the F1 score of this multi-task model on the MSRA and People's Daily datasets improves significantly (by 0.55% and 3.41%, respectively), demonstrating the effectiveness of multi-task learning for Chinese named entity recognition.
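The final CRF decoding step described in the abstract is, in most implementations, Viterbi decoding over per-token emission scores (here, the outputs of the BiLSTM/attention layers) and tag-to-tag transition scores. A minimal numpy sketch of that decoding, with illustrative shapes and no connection to the authors' actual code:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence for one sentence.

    emissions:   (seq_len, num_tags) per-token tag scores, e.g. from a
                 BiLSTM + attention encoder (illustrative assumption).
    transitions: (num_tags, num_tags) CRF scores; transitions[i, j] is the
                 score of moving from tag i to tag j between adjacent tokens.
    Returns the best tag index sequence and its total score.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                      # best score ending in each tag
    backptr = np.zeros((seq_len, num_tags), dtype=int)
    for t in range(1, seq_len):
        # candidate[i, j]: best path ending at t-1 in tag i, then stepping to tag j
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)
    # Trace back-pointers from the best final tag to recover the path.
    best_last = int(score.argmax())
    path = [best_last]
    for t in range(seq_len - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1], float(score.max())

# Toy example: 3 tokens, 2 tags, no transition preferences.
em = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 0.0]])
tr = np.zeros((2, 2))
path, total = viterbi_decode(em, tr)
print(path, total)  # → [0, 1, 0] 3.0
```

During training the CRF instead maximizes the log-likelihood of the gold tag sequence (via the forward algorithm); Viterbi is only used at inference time to extract the annotation sequence.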

