Article

A BERT-Based Hybrid Short Text Classification Model Incorporating CNN and Attention-Based BiGRU

Publisher

IGI GLOBAL
DOI: 10.4018/JOEUC.294580

Keywords

Deep Learning; Fusion Framework; Natural Language Processing; Short Text Classification

Funding

  1. National Social Science Foundation of China [19BTQ032]


The paper proposes a feature fusion framework based on BERT, in which a CNN captures static features, a BiGRU captures contextual features, and an attention mechanism assigns weights to salient words. Experimental results show that the model outperforms other state-of-the-art baseline methods.
Short text classification is a research focus in natural language processing (NLP) and is widely applied to news classification, sentiment analysis, mail filtering, and other fields. In recent years, deep learning techniques have been applied to text classification and have made notable progress. Unlike ordinary text, short text suffers from a limited vocabulary and feature sparsity, which places higher demands on semantic feature representation. To address this issue, this paper proposes a feature fusion framework based on bidirectional encoder representations from transformers (BERT). In this hybrid method, BERT is used to produce word vector representations, and a convolutional neural network (CNN) captures static local features. As a complement, a bidirectional gated recurrent unit (BiGRU) network is adopted to capture contextual features. Furthermore, an attention mechanism is introduced to assign weights to salient words. The experimental results confirm that the proposed model significantly outperforms the other state-of-the-art baseline methods.
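The hybrid architecture described in the abstract lends itself to a compact implementation. The following is a minimal sketch in PyTorch with the Hugging Face transformers library; the concatenation-based fusion, additive attention form, hidden size, filter counts, and kernel sizes are illustrative assumptions, since the abstract does not specify the paper's exact configuration.

```python
# Minimal sketch of a BERT + CNN + attention-BiGRU fusion classifier.
# Layer sizes, kernel sizes, and the fusion-by-concatenation strategy are
# assumptions for illustration, not the paper's reported configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class BertCnnAttBiGru(nn.Module):
    def __init__(self, num_classes, hidden=128, kernel_sizes=(2, 3, 4), n_filters=100):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        dim = self.bert.config.hidden_size  # 768 for bert-base

        # CNN branch: 1D convolutions over the token dimension capture
        # static local n-gram features.
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, n_filters, k) for k in kernel_sizes]
        )

        # BiGRU branch: captures contextual features in both directions.
        self.bigru = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)

        # Additive attention over BiGRU states assigns weights to salient words.
        self.att_proj = nn.Linear(2 * hidden, 2 * hidden)
        self.att_vec = nn.Linear(2 * hidden, 1, bias=False)

        self.classifier = nn.Linear(len(kernel_sizes) * n_filters + 2 * hidden,
                                    num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual word vectors from BERT: (batch, seq_len, dim).
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state

        # CNN branch: convolve over time, then max-pool each feature map.
        c = h.transpose(1, 2)  # (batch, dim, seq_len)
        cnn_feats = torch.cat(
            [F.relu(conv(c)).max(dim=2).values for conv in self.convs], dim=1
        )

        # BiGRU branch with attention pooling over word positions.
        g, _ = self.bigru(h)  # (batch, seq_len, 2*hidden)
        scores = self.att_vec(torch.tanh(self.att_proj(g))).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)  # word-level attention weights
        gru_feats = (alpha.unsqueeze(-1) * g).sum(dim=1)

        # Feature fusion by concatenation, then classification.
        return self.classifier(torch.cat([cnn_feats, gru_feats], dim=1))
```

In use, input_ids and attention_mask would come from the matching BertTokenizer, and the whole model can be fine-tuned end to end with a standard cross-entropy loss.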
