4.6 Review

Deep Multimodal Emotion Recognition on Human Speech: A Review

Journal

Applied Sciences (Basel)
Volume 11, Issue 17, Article 7962

Publisher

MDPI
DOI: 10.3390/app11177962

Keywords

multimodal emotion recognition; multimodal temporal learning; multimodal signal processing; affective computing; speech emotion recognition

Abstract

This work reviews the state of the art in multimodal speech emotion recognition methodologies, focusing on audio, text, and visual information. We provide a new, descriptive categorization of methods based on how they handle inter-modality and intra-modality dynamics in the temporal dimension: (i) non-temporal architectures (NTA), which do not significantly model the temporal dimension in either the unimodal or the multimodal interactions; (ii) pseudo-temporal architectures (PTA), which oversimplify the temporal dimension in one of the unimodal or multimodal interactions; and (iii) temporal architectures (TA), which try to capture both unimodal and cross-modal temporal dependencies. In addition, we review the basic feature representation methods for each modality and present aggregated evaluation results for the reported methodologies. Finally, we conclude with an in-depth analysis of future challenges related to validation procedures, representation learning, and method robustness.
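
To make the categorization concrete, the following is a minimal PyTorch sketch contrasting an NTA-style model with a TA-style model for an audio + text pairing. It is an illustrative assumption, not code from any of the reviewed systems: the module names, feature dimensions, emotion count, and the GRU/cross-attention choices are placeholders. A PTA-style variant would, for example, keep the per-modality GRU encoders but fall back to the pooled concatenation used for fusion in the NTA model.

# Illustrative sketch only (not taken from the reviewed methodologies): an
# NTA-style and a TA-style fusion model for audio + text emotion recognition.
# Dimensions, module names, and the emotion count are assumed placeholders.
import torch
import torch.nn as nn


class NonTemporalFusion(nn.Module):
    """NTA-style: mean-pool each modality over time, then concatenate and classify."""
    def __init__(self, audio_dim=40, text_dim=300, hidden=128, n_emotions=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_emotions),
        )

    def forward(self, audio, text):
        # audio: (batch, T_audio, audio_dim), text: (batch, T_text, text_dim)
        pooled = torch.cat([audio.mean(dim=1), text.mean(dim=1)], dim=-1)
        return self.classifier(pooled)


class TemporalFusion(nn.Module):
    """TA-style: per-modality GRUs plus cross-modal attention over time steps."""
    def __init__(self, audio_dim=40, text_dim=300, hidden=128, n_emotions=4):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.text_rnn = nn.GRU(text_dim, hidden, batch_first=True)
        # Each text step attends over all audio steps (cross-modal temporal dependency).
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_emotions)

    def forward(self, audio, text):
        audio_seq, _ = self.audio_rnn(audio)   # (batch, T_audio, hidden)
        text_seq, _ = self.text_rnn(text)      # (batch, T_text, hidden)
        attended, _ = self.cross_attn(text_seq, audio_seq, audio_seq)
        fused = torch.cat([attended.mean(dim=1), text_seq.mean(dim=1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    audio = torch.randn(2, 50, 40)   # e.g. 50 frames of 40-d filterbank features
    text = torch.randn(2, 12, 300)   # e.g. 12 tokens of 300-d word embeddings
    print(NonTemporalFusion()(audio, text).shape)   # torch.Size([2, 4])
    print(TemporalFusion()(audio, text).shape)      # torch.Size([2, 4])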
