Journal
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
Volume: -, Issue: -, Pages: -
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSS.2022.3197994
Keywords
Tensors; Uncertainty; Computational modeling; Correlation; Bayes methods; Sentiment analysis; Robustness; Multimodal data; sentiment analysis; tensor analysis; uncertainty-based learning
Funding
- National Natural Science Foundation of China [62072256]
- Postgraduate Research and Practice Innovation Program of Jiangsu Province, China [KYCX21_0740]
This article proposes a novel deep tensor evidence fusion (DTEF) network for multimodal sentiment classification. By extracting rich intermodal and intramodal information, utilizing a time cue evaluation network, and incorporating uncertainty through a trusted fusion layer, the proposed network improves the accuracy and robustness of sentiment classification.
Recently, multimodal sentiment analysis of social media has attracted increasing attention; its core idea is to discover a heuristic fusion strategy that analyzes sentiment orientations over heterogeneous multimodal sources from a learned compact multimodal representation. Existing multimodal fusion techniques not only struggle to achieve full heterogeneous data interaction but are also unable to dynamically assess the quality of the various modal data to determine predictability. In this article, we present a novel deep tensor evidence fusion (DTEF) network for multimodal sentiment classification. First, we propose a common view evaluation network that uses a long short-term memory (LSTM) network and a tensor-based neural network to extract rich intermodal and intramodal information. Then, we propose a unique time cue evaluation network that exploits the temporal granularity associated with numerous pattern sequences. Finally, to make reliable decisions, we incorporate uncertainty through a trusted fusion layer, which improves the accuracy and robustness of sentiment classification. Our model is validated on the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) and CMU Multimodal Corpus of Sentiment Intensity (CMU-MOSI) datasets, and the experimental findings demonstrate the superior accuracy of the proposed network compared with state-of-the-art methods.
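The abstract does not detail how the trusted fusion layer combines per-modality evidence, but uncertainty-based fusion of this kind is commonly realized with subjective-logic opinions derived from Dirichlet evidence, combined by a reduced Dempster's rule (as in trusted multi-view classification). The sketch below is an illustrative assumption of that general technique, not the paper's exact DTEF implementation; the Dirichlet parameters and class counts are hypothetical.

```python
import numpy as np

def opinion(alpha):
    """Turn Dirichlet parameters into subjective-logic belief masses and uncertainty.

    alpha = evidence + 1 per class; S = sum(alpha) is the Dirichlet strength.
    Belief b_k = e_k / S and uncertainty u = K / S, so b.sum() + u == 1.
    """
    K = alpha.size
    S = alpha.sum()
    b = (alpha - 1.0) / S
    u = K / S
    return b, u

def combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for two opinions (used in trusted fusion schemes)."""
    # conflict: mass the two modalities assign to *different* classes
    C = np.outer(b1, b2).sum() - (b1 * b2).sum()
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# toy example: three sentiment classes, two modalities (hypothetical evidence)
b_t, u_t = opinion(np.array([9.0, 2.0, 1.0]))  # confident text evidence
b_a, u_a = opinion(np.array([2.0, 2.0, 2.0]))  # uninformative audio evidence
b, u = combine(b_t, u_t, b_a, u_a)
# the fused opinion stays normalized, and adding even weak corroborating
# evidence lowers the overall uncertainty below that of either modality alone
```

A useful property of this rule is that a low-quality (high-uncertainty) modality barely moves the fused belief, which matches the abstract's goal of dynamically discounting unreliable modal data.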