Article

AOBERT: All-modalities-in-One BERT for multimodal sentiment analysis

Journal

INFORMATION FUSION
Volume 92, Issue -, Pages 37-45

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2022.11.022

Keywords

Multimodal Sentiment Analysis; Single-stream Transformer; Multimodal Masked Language Model; Alignment Prediction


Multimodal sentiment analysis uses multiple modalities to predict sentiment, but traditional fusion methods lose intra-modality and inter-modality information. AOBERT, a single-stream transformer pre-trained on two tasks, addresses this problem and achieves state-of-the-art results.
Multimodal sentiment analysis utilizes various modalities such as Text, Vision, and Speech to predict sentiment. As these modalities have unique characteristics, methods have been developed for fusing their features. However, the overall modality characteristics are not guaranteed, because traditional fusion methods incur some loss of intra-modality and inter-modality information. To solve this problem, we introduce a single-stream transformer, All-modalities-in-One BERT (AOBERT). The model is pre-trained on two tasks simultaneously: Multimodal Masked Language Modeling (MMLM) and Alignment Prediction (AP). The dependencies and relationships between modalities can be determined through these two pre-training tasks. AOBERT achieved state-of-the-art results on the CMU-MOSI, CMU-MOSEI, and UR-FUNNY datasets. Furthermore, ablation studies that examined combinations of modalities, the effects of MMLM and AP, and fusion methods confirmed the effectiveness of the proposed model.
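The abstract specifies the two pre-training objectives only at a high level. The following is a minimal PyTorch sketch of how a single-stream encoder could combine an MMLM loss with an AP loss; the module names, the 15% mask rate, the [MASK] token id of 103, and the 35-d visual / 74-d acoustic feature sizes are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of AOBERT-style pre-training: one transformer stack over
# the concatenated text/vision/speech sequence, trained jointly on
# Multimodal Masked Language Modeling (MMLM) and Alignment Prediction (AP).
# All dimensions and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class SingleStreamEncoder(nn.Module):
    """Single-stream encoder: all modalities pass through one transformer."""
    def __init__(self, dim=256, vocab=30522, n_layers=4, n_heads=4):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, dim)
        self.vision_proj = nn.Linear(35, dim)   # assumed visual feature size
        self.speech_proj = nn.Linear(74, dim)   # assumed acoustic feature size
        self.type_emb = nn.Embedding(3, dim)    # modality-type embeddings
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mlm_head = nn.Linear(dim, vocab)   # MMLM: recover masked tokens
        self.ap_head = nn.Linear(dim, 2)        # AP: aligned vs. not aligned

    def forward(self, text_ids, vision, speech):
        t = self.text_emb(text_ids) + self.type_emb.weight[0]
        v = self.vision_proj(vision) + self.type_emb.weight[1]
        s = self.speech_proj(speech) + self.type_emb.weight[2]
        # One stream: concatenate all modality sequences along time.
        return self.encoder(torch.cat([t, v, s], dim=1))

def pretrain_step(model, text_ids, vision, speech, ap_label, mask_id=103):
    # MMLM: mask ~15% of text positions, predict them conditioned on all
    # modalities (rate and mask token id are assumptions).
    mask = torch.rand(text_ids.shape) < 0.15
    mask[:, 0] = True  # ensure at least one masked position per example
    masked = text_ids.clone()
    masked[mask] = mask_id
    h = model(masked, vision, speech)
    t_len = text_ids.size(1)
    mlm_logits = model.mlm_head(h[:, :t_len])
    mlm_loss = nn.functional.cross_entropy(mlm_logits[mask], text_ids[mask])
    # AP: binary classification from the first position of whether the
    # modality sequences come from the same (aligned) utterance.
    ap_loss = nn.functional.cross_entropy(model.ap_head(h[:, 0]), ap_label)
    return mlm_loss + ap_loss

if __name__ == "__main__":
    B, Lt, Lv, Ls = 2, 12, 20, 20
    model = SingleStreamEncoder()
    loss = pretrain_step(model,
                         torch.randint(1000, (B, Lt)),
                         torch.randn(B, Lv, 35),
                         torch.randn(B, Ls, 74),
                         ap_label=torch.randint(2, (B,)))
    loss.backward()
    print(loss.item())
```

Sampling both objectives in the same step lets every gradient update use the full concatenated sequence, which is the property the abstract attributes to the single-stream design.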
