Article

Sentiment Analysis Using Stacked Gated Recurrent Unit for Arabic Tweets

Journal

IEEE ACCESS
Volume 9, Pages 137176-137187

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3114313

Keywords

Sentiment analysis; Analytical models; Social networking (online); Transformers; Deep learning; Convolutional neural networks; Stacking; Artificial intelligence; Natural language processing; Recurrent neural networks

Funding

  1. Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University through the Graduate Students Research Support Program

Summary

Over the past decade, the amount of Arabic content on websites and social media has increased significantly, providing rich sources for trend analysis through natural language processing tasks such as sentiment analysis. Deep learning techniques, including the stacked GRU (SGRU) and stacked bidirectional GRU (SBi-GRU), have been applied to improve accuracy on this unstructured data. The paper proposes these neural models along with an ensemble method for Arabic NLP and uses automatic sentiment refinement (ASR) to discard stop words, achieving high accuracy in sentiment classification.

Abstract
Over the last decade, the amount of Arabic content created on websites and social media has grown significantly. Opinions are shared openly and freely on social media, providing a rich source for trend analyses. These analyses can be carried out automatically with natural language processing tasks such as sentiment analysis. Such tasks were initially implemented with machine learning; owing to its accuracy on unstructured data, deep learning has been used increasingly as well. The gated recurrent unit (GRU) is a promising approach for analyzing text in languages, such as Arabic, that exhibit large morphological variation. We propose two neural models, the stacked gated recurrent unit (SGRU) and the stacked bidirectional gated recurrent unit (SBi-GRU), combined with word embeddings to mine Arabic opinions. We also propose a new way of discarding stop words, automatic sentiment refinement (ASR), instead of relying on manually collected stop words or on low-quality publicly available Arabic stop-word lists. The performance of the proposed models is compared with that of long short-term memory (LSTM), the support vector machine (SVM), and the recent pretrained Arabic bidirectional encoder representations from transformers (AraBERT). In addition, we compare our models against an ensemble of the abovementioned models to find the best architecture for Arabic natural language processing (NLP). To the best of our knowledge, no previous study has applied either the unidirectional or the bidirectional SGRU to Arabic sentiment classification, and no ensemble models have been built from these architectures for Arabic. The results show that six-layer SGRU stacking and five-layer SBi-GRU stacking achieve the highest accuracy and that the ensemble method outperforms all other models, with an accuracy exceeding 90%.
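
As a rough illustration of the stacked architecture described in the abstract, the sketch below builds a five-layer stacked bidirectional GRU (SBi-GRU) classifier in Keras. Only the layer count is taken from the abstract; the vocabulary size, sequence length, embedding dimension, GRU units, dropout rate, and binary sigmoid output are illustrative assumptions rather than settings reported in the paper, and ASR-style stop-word removal is assumed to have already happened during preprocessing.

import tensorflow as tf
from tensorflow.keras import layers

# Assumed hyperparameters for illustration only; not values reported in the paper.
VOCAB_SIZE = 50_000   # vocabulary size after preprocessing (e.g., ASR stop-word removal)
MAX_LEN = 64          # maximum tweet length in tokens
EMBED_DIM = 300       # word-embedding dimension
GRU_UNITS = 128       # hidden units per GRU layer
NUM_LAYERS = 5        # five-layer SBi-GRU stacking, per the abstract

def build_sbigru() -> tf.keras.Model:
    """Stacked bidirectional GRU classifier for binary sentiment labels."""
    model = tf.keras.Sequential(name="sbigru_sentiment")
    model.add(layers.Input(shape=(MAX_LEN,), dtype="int32"))
    model.add(layers.Embedding(VOCAB_SIZE, EMBED_DIM))
    # Intermediate Bi-GRU layers return full sequences so the next stacked
    # layer can consume them; the final layer returns a single vector.
    for _ in range(NUM_LAYERS - 1):
        model.add(layers.Bidirectional(layers.GRU(GRU_UNITS, return_sequences=True)))
    model.add(layers.Bidirectional(layers.GRU(GRU_UNITS)))
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_sbigru()
model.summary()

A unidirectional SGRU variant would simply drop the Bidirectional wrapper (the abstract reports six stacked layers performing best in that case), and the ensemble described in the paper combines such models with LSTM, SVM, and AraBERT, for example by aggregating their predictions.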
