Article

An Audio-Visual Emotion Recognition System Using Deep Learning Fusion for a Cognitive Wireless Framework

Journal

IEEE Wireless Communications
Volume 26, Issue 3, Pages 62-68

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/MWC.2019.1800419


Funding

  1. Deanship of Scientific Research at King Saud University, Riyadh, Saudi Arabia [RG-1436-023]

Abstract

Automatically recognizing patients' emotions can facilitate a connected healthcare framework by giving stakeholders in the healthcare industry automatic feedback on patients' states and satisfaction levels. In this article, we propose an automatic audio-visual emotion recognition system within a connected healthcare framework. The system uses a 2D CNN model for the speech modality and a 3D CNN model for the visual modality. The speech signal is preprocessed to extract the PS-PA feature vector. The features from the two CNN models are fused by two ELM networks: the first ELM is trained with gender-specific data, while the second is trained with emotion-specific data. The proposed system is evaluated on three databases, and the experiments demonstrate its effectiveness. In the healthcare framework, we employ edge computing before the intensive processing in the cloud; edge caching stores the CNN model parameters, which speeds up testing.
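The ELM-based fusion stage described in the abstract can be illustrated with a minimal NumPy sketch. An extreme learning machine keeps its random input weights fixed and solves the output weights in closed form via a pseudo-inverse. Everything below (feature dimensions, synthetic data, six emotion classes, the single ELM instead of the paper's gender-specific and emotion-specific pair) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def train_elm(X, y, n_hidden=128, n_classes=6, seed=0):
    """Train a single-hidden-layer ELM: random input weights stay fixed;
    output weights are solved in closed form with a pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    T = np.eye(n_classes)[y]                         # one-hot targets
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Illustrative fused input: 2D-CNN speech features concatenated with
# 3D-CNN visual features (both dimensions are assumptions).
rng = np.random.default_rng(1)
speech_feat = rng.standard_normal((200, 64))
visual_feat = rng.standard_normal((200, 64))
fused = np.concatenate([speech_feat, visual_feat], axis=1)
labels = rng.integers(0, 6, size=200)                # e.g. six emotion classes

model = train_elm(fused, labels)
pred = elm_predict(fused, model)
print(pred.shape)  # (200,)
```

Because the output weights come from a single linear solve rather than iterative backpropagation, the ELM stage is cheap to retrain, which fits the edge-caching scenario where the heavy CNN parameters are stored once and only the lightweight fusion layer needs updating.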

