Article

Scarce Resource Dimensional Sentiment Analysis Using Domain-Distilled BERT

Journal

Publisher

INST INFORMATION SCIENCE
DOI: 10.6688/JISE.202303

Keywords

scarce resource; domain distillation; sentiment analysis; deep neural network; natural language processing


This study proposes a domain-distilled method to address the data scarcity problem in dimensional sentiment analysis. A domain discriminator is used to distinguish the domains of learned features, and its prediction loss is maximized so that the model learns domain-invariant features. Experimental results show that the proposed domain-distilled BERT outperforms the original BERT and other deep learning models in predicting dimensional sentiment scores.
Considerable research has focused on dimensional sentiment analysis, which seeks to predict a real-valued sentiment score in multiple dimensions for a given sentiment expression. Although state-of-the-art methods can obtain decent results with high-quality, large-scale corpus data, performance declines significantly under conditions of data scarcity. To address this data scarcity problem, this study proposes a domain-distilled method to learn domain-invariant features instead of the domain-specific features commonly used by traditional methods, because learning domain-specific features under data-scarce conditions may restrict coverage of the domain feature space. The proposed distillation process is accomplished using a domain discriminator to distinguish the feature's domain. In addition, the domain discriminator is trained by maximizing the prediction loss, because this makes it difficult for the discriminator to distinguish among domains, thus improving the model's ability to learn domain-invariant features. To evaluate the proposed method, we implement the domain-distilled method in Bidirectional Encoder Representations from Transformers (BERT) due to its promising results in many natural language processing (NLP) tasks. Experiments on EmoBank, a three-dimensional sentiment corpus, show that the proposed domain-distilled BERT outperforms the original BERT and other deep learning models in terms of dimensional sentiment score prediction.
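This entry does not spell out the training mechanics, but the setup it describes (a domain discriminator whose prediction loss the feature learner maximizes) corresponds to the familiar gradient-reversal formulation of adversarial domain adaptation. The PyTorch sketch below shows one minimal realization under that assumption; the class names, the use of the [CLS] vector as the shared feature, the equal loss weighting, and the `lambda_` scaling factor are all illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the encoder is pushed to *confuse* the discriminator."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is needed for lambda_, hence the trailing None.
        return -ctx.lambda_ * grad_output, None


class DomainDistilledBERT(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 n_dims=3, n_domains=2, lambda_=1.0):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.lambda_ = lambda_
        # Regression head: one real-valued score per sentiment dimension
        # (e.g. valence, arousal, dominance for EmoBank).
        self.sentiment_head = nn.Linear(hidden, n_dims)
        # Adversarial head: tries to identify each example's source domain.
        self.domain_head = nn.Linear(hidden, n_domains)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        feats = out.last_hidden_state[:, 0]           # [CLS] representation
        scores = self.sentiment_head(feats)           # sentiment regression
        domain_logits = self.domain_head(
            GradReverse.apply(feats, self.lambda_))   # adversarial branch
        return scores, domain_logits


# Joint objective (sketch): the regression loss is minimized as usual, while
# the reversed gradient from the domain loss drives the encoder toward
# features the discriminator cannot separate, i.e. domain-invariant ones:
# loss = F.mse_loss(scores, targets) + F.cross_entropy(domain_logits, domains)
```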
