4.7 Article

AOBERT: All-modalities-in-One BERT for multimodal sentiment analysis

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

Sijie Mai et al.

Summary: The wide adoption of smart devices has enabled the use of multimodal data, but training networks with cross-modal information remains challenging due to the modality gap. Additionally, the learning of inter-sample and inter-class relationships is often neglected. To address these issues, we propose HyCon, a framework for hybrid contrastive learning that can explore cross-modal interactions, learn inter-sample and inter-class relationships, and reduce the modality gap. Our method outperforms baselines on multimodal sentiment analysis and emotion recognition.

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING (2023)

Article Computer Science, Artificial Intelligence

Multi-Level Fine-Scaled Sentiment Sensing with Ambivalence Handling

Zhaoxia Wang et al.

INTERNATIONAL JOURNAL OF UNCERTAINTY FUZZINESS AND KNOWLEDGE-BASED SYSTEMS (2020)