Article

Label-Embedding Bi-directional Attentive Model for Multi-label Text Classification

Journal

NEURAL PROCESSING LETTERS
Volume 53, Issue 1, Pages 375-389

Publisher

SPRINGER
DOI: 10.1007/s11063-020-10411-8

Keywords

Multi-label text classification; BERT; Label embedding; Bi-directional attention

Funding

  1. National Natural Science Foundation of China [U1711263]

Abstract

This paper proposes a Label-Embedding Bi-directional Attentive model to improve BERT's text classification framework; experiments on five datasets show notable improvements over both baseline and state-of-the-art models.
Multi-label text classification is a critical task in the field of natural language processing. As the latest language representation model, BERT obtains new state-of-the-art results on classification tasks. Nevertheless, BERT's text classification framework neglects to make full use of token-level text representations and label embeddings, since it uses only the final hidden state of the [CLS] token as the sequence-level text representation for classification. We hypothesize that finer-grained token-level text representations and label embeddings contribute to classification. Consequently, in this paper we propose a Label-Embedding Bi-directional Attentive model that improves the performance of BERT's text classification framework. In particular, we extend BERT's text classification framework with label embedding and bi-directional attention. Experimental results on five datasets indicate that our model yields notable improvements over both baseline and state-of-the-art models.
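
The abstract describes the mechanism only at a high level, so the PyTorch sketch below is an assumption-laden illustration rather than the authors' implementation: it pairs BERT's token-level hidden states with a learned label-embedding matrix and applies dot-product attention in both directions (label-to-token and token-to-label) before scoring each label. The class name LabelEmbedBiAttn, the random initialization of the label embeddings, the additive fusion of the two attended views, and the masked mean pooling are all hypothetical choices; the paper's exact formulation may differ.

import torch
import torch.nn as nn
from transformers import BertModel

class LabelEmbedBiAttn(nn.Module):
    """Hypothetical sketch: BERT token states attended against label
    embeddings in both directions, pooled into per-label logits."""

    def __init__(self, num_labels, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        d = self.bert.config.hidden_size
        # One learned embedding vector per label (assumption: random init).
        self.label_emb = nn.Parameter(torch.randn(num_labels, d) * 0.02)
        self.score = nn.Linear(d, 1)

    def forward(self, input_ids, attention_mask):
        # Token-level states for the whole sequence, not just [CLS].
        H = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state  # (B, T, d)
        L = self.label_emb                                              # (C, d)

        # Token/label similarity matrix, with padded tokens masked out.
        S = torch.einsum("btd,cd->btc", H, L)                           # (B, T, C)
        S = S + (1.0 - attention_mask.float()).unsqueeze(-1) * -1e9

        # Label-to-token attention: each label attends over the tokens.
        a_l2t = torch.softmax(S, dim=1)
        label_view = torch.einsum("btc,btd->bcd", a_l2t, H)             # (B, C, d)

        # Token-to-label attention: each token attends over the labels,
        # then valid tokens are mean-pooled into one sequence vector.
        a_t2l = torch.softmax(S, dim=2)
        token_view = torch.einsum("btc,cd->btd", a_t2l, L)              # (B, T, d)
        m = attention_mask.float().unsqueeze(-1)
        pooled = (token_view * m).sum(1) / m.sum(1).clamp(min=1.0)      # (B, d)

        # Fuse the two views and score each label independently.
        fused = label_view + pooled.unsqueeze(1)                        # (B, C, d)
        return self.score(fused).squeeze(-1)                           # (B, C)

During training, the (B, C) logits would feed torch.nn.BCEWithLogitsLoss against multi-hot label vectors, the standard setup for multi-label classification; at inference, thresholding sigmoid(logits) at 0.5 yields the predicted label set.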
