Article

Using a stacked residual LSTM model for sentiment intensity prediction

Journal

NEUROCOMPUTING
Volume 322, Pages 93-101

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.09.049

Keywords

Sentiment intensity prediction; Stacked residual LSTM; Neural network; Sentiment analysis

Funding

  1. National Natural Science Foundation of China [61702443, 61762091]
  2. Educational Commission of Yunnan Province of China [2017ZZX030]

Abstract

The sentiment intensity of a text indicates the strength of its association with positive sentiment, expressed as a continuous real value between 0 and 1. Compared to polarity classification, predicting sentiment intensities enables more fine-grained sentiment analysis. By introducing word-embedding techniques, recent studies using deep neural models have outperformed existing lexicon- and regression-based methods for sentiment intensity prediction. A common way to improve a neural network's performance is to add more layers so that it can learn higher-level features. However, as depth increases, the network degrades and becomes more difficult to train, because errors accumulate across layers and gradients vanish. To address this problem, this paper proposes a stacked residual LSTM model to predict the sentiment intensity of a given text. After investigating the performance of shallow and deep architectures, we introduce a residual connection after every few LSTM layers to construct an 8-layer neural network. The residual connections center layer gradients and propagated errors, which makes the deeper network easier to optimize. This approach enables us to successfully stack more LSTM layers for this task and improves the prediction accuracy of existing methods. Experimental results show that the proposed method outperforms the lexicon-, regression-, and conventional NN-based methods proposed in previous studies. (C) 2018 Elsevier B.V. All rights reserved.
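
As a rough illustration of the architecture the abstract describes, the PyTorch sketch below stacks 8 LSTM layers in blocks of 2 and adds an identity (residual) connection around each block, with a sigmoid head producing an intensity in [0, 1]. The class name StackedResidualLSTM, the block size, the layer dimensions, and the output head are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    """Sketch of an 8-layer LSTM with a residual connection
    around every block of 2 layers (all sizes are assumptions)."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300,
                 num_layers=8, layers_per_block=2):
        super().__init__()
        assert num_layers % layers_per_block == 0
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Each block is a small multi-layer LSTM; blocks are chained,
        # and each block's input is added back to its output.
        self.blocks = nn.ModuleList([
            nn.LSTM(embed_dim if i == 0 else hidden_dim, hidden_dim,
                    num_layers=layers_per_block, batch_first=True)
            for i in range(num_layers // layers_per_block)
        ])
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)  # (batch, seq_len, embed_dim)
        for block in self.blocks:
            h, _ = block(x)
            # Residual (skip) connection, applied only when the
            # input and output feature sizes match.
            x = h + x if x.size(-1) == h.size(-1) else h
        # Predict an intensity in [0, 1] from the final time step.
        return torch.sigmoid(self.out(x[:, -1, :])).squeeze(-1)

# Example usage with random token ids (vocabulary size is arbitrary):
model = StackedResidualLSTM(vocab_size=10000)
ids = torch.randint(0, 10000, (4, 20))   # batch of 4 sequences, length 20
intensity = model(ids)                   # 4 predicted intensities in [0, 1]

Since intensity prediction is a regression task, such a model would typically be trained with a mean-squared-error loss against gold intensity scores; that choice is also an assumption here, as the abstract does not name the loss function.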
