Article

FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 13, Issue 4, Pages 2058-2070

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TAFFC.2022.3208309

Keywords

Facial expression recognition; texture features; illumination normalization; parallel network; facial expression classification

Funding

  1. Project Smart Solutions in Ubiquitous Computing Environments, Grant Agency of Excellence [UHKFIM-GE-2022]
  2. SPEV project Smart Solutions in Ubiquitous Computing Environments [UHK-FIMSPEV-2022-2102]
  3. University of Hradec Kralove, Faculty of Informatics and Management, Czech Republic

Abstract

With the advent of deep learning, research on facial expression recognition (FER) has attracted considerable interest, and various deep convolutional neural network (DCNN) architectures have been developed for real-time, efficient FER. One challenge in FER is obtaining trustworthy features that are strongly associated with changes in facial expression. Furthermore, traditional DCNNs for FER face two significant issues: insufficient training data, which leads to overfitting, and intra-class variation in facial appearance. This study proposes FLEPNet, a texture-based feature-level ensemble parallel network for FER, and shows that it addresses these problems. FLEPNet uses multi-scale convolutional and multi-scale residual block-based DCNNs as building blocks. First, modified homomorphic filtering is applied to normalize illumination effectively, which minimizes intra-class differences. Next, to protect the deep networks against insufficient training data, texture analysis is performed on facial expression images: four texture features are extracted and combined with the image's original characteristics. Finally, the integrated features retrieved by the two networks are used to classify seven facial expressions. Experimental results reveal that the proposed technique achieves average accuracies of 0.9914, 0.9894, 0.9796, 0.8756, and 0.8072 on the Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, Real-world Affective Face Database, and Facial Expression Recognition 2013 databases, respectively. Moreover, the experimental outcomes demonstrate greater reliability than competing approaches.
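The pipeline described in the abstract (illumination normalization, texture-augmented inputs, and feature-level fusion of two parallel backbones) can be illustrated with a minimal sketch. This is not the authors' implementation: the paper's "modified" homomorphic filter and its multi-scale convolutional/residual blocks are not specified in the abstract, so the code below substitutes a standard homomorphic filter and two generic convolutional branches; all names, layer sizes, and hyper-parameters are illustrative.

```python
# Minimal sketch, assuming standard homomorphic filtering and generic
# branches; not the FLEPNet architecture itself.
import numpy as np
import torch
import torch.nn as nn

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """Attenuate low-frequency illumination and boost high-frequency
    reflectance in a single-channel float image scaled to [0, 1]."""
    rows, cols = img.shape
    log_img = np.log1p(img)  # I = L * R  ->  log I = log L + log R
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    # Gaussian high-emphasis transfer function in the frequency domain.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    dist2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * dist2 / d0 ** 2)) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(H * spectrum)).real
    return np.expm1(filtered)

class TwoBranchFER(nn.Module):
    """Feature-level ensemble of two parallel backbones: branch features
    are concatenated before a shared 7-way expression classifier."""
    def __init__(self, in_channels=5, feat_dim=128, num_classes=7):
        # in_channels = 1 normalized grayscale image + 4 texture maps
        # (channel count is an assumption based on the abstract).
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.branch_a = branch()  # stand-in for the multi-scale convolutional branch
        self.branch_b = branch()  # stand-in for the multi-scale residual branch
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.classifier(fused)
```

In this reading, the input stacks the illumination-normalized face with four texture-descriptor maps (the abstract does not name which four), and the concatenation before the classifier is what makes the ensemble "feature-level" rather than score-level.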
