Article

A multi-stage dynamical fusion network for multimodal emotion recognition

Journal

COGNITIVE NEURODYNAMICS
Volume 17, Issue 3, Pages 671-680

Publisher

SPRINGER
DOI: 10.1007/s11571-022-09851-w

Keywords

Physiological signals; Emotion recognition; Multimodal dynamic fusion; Multi-stage fusion


In recent years, there has been growing interest in emotion recognition using physiological signals. However, current studies often overlook cross-modal interactions in multimodal emotion recognition. To address this issue, we propose a multi-stage multimodal dynamical fusion network (MSMDFN) that explores the interactions among features extracted from multiple modalities. Our experiments on the DEAP dataset show that our method outperforms existing one-stage multimodal emotion recognition approaches.
In recent years, emotion recognition using physiological signals has become a popular research topic. Physiological signals reflect an individual's true emotional state and are therefore widely applied to emotion recognition. Multimodal signals provide more discriminative information than a single modality, which has attracted the interest of researchers. However, current studies on multimodal emotion recognition typically adopt a one-stage fusion method, which overlooks cross-modal interactions. To solve this problem, we propose a multi-stage multimodal dynamical fusion network (MSMDFN). Through the MSMDFN, a joint representation based on cross-modal correlation is obtained. Initially, the latent and essential interactions among features extracted independently from multiple modalities are explored. Subsequently, a multi-stage fusion network splits the fusion procedure into multiple stages according to the correlations observed earlier, which allows us to exploit much finer-grained unimodal, bimodal, and trimodal intercorrelations. For evaluation, the MSMDFN was verified on the multimodal benchmark DEAP. The experiments indicate that our method outperforms related one-stage multimodal emotion recognition works.
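The staged fusion described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's actual architecture: the modality names (EEG, EOG, EMG), feature dimensions, fusion order, and random projection weights are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(a, b):
    # Hypothetical fusion step: concatenate two feature vectors and
    # project back to a shared dimension with a random linear map.
    w = rng.standard_normal((a.size + b.size, a.size)) * 0.1
    return np.tanh(np.concatenate([a, b]) @ w)

# Toy unimodal feature vectors for three physiological modalities
# (hypothetical names and sizes).
eeg = rng.standard_normal(8)
eog = rng.standard_normal(8)
emg = rng.standard_normal(8)

# Stage 1: bimodal fusion of an assumed highly correlated pair (EEG + EOG).
bi = fuse(eeg, eog)

# Stage 2: trimodal fusion of the bimodal representation with the
# remaining modality, yielding the joint representation.
tri = fuse(bi, emg)

# Hypothetical 2-class classifier head over the joint representation.
logits = tri @ rng.standard_normal((tri.size, 2))
print(logits.shape)  # (2,)
```

The key point the sketch conveys is that fusion happens in ordered stages guided by cross-modal correlation, rather than concatenating all modalities in a single one-stage step.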

