Article

Auto-FERNet: A Facial Expression Recognition Network With Architecture Search

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE Computer Society
DOI: 10.1109/TNSE.2021.3083739

Keywords

Task analysis; Computer architecture; Face recognition; Search problems; Microprocessors; Training; Uncertainty; Neural network; facial expression recognition; neural architecture search

Funding

  1. National Natural Science Foundation of China [61673187]
  2. Qatar National Research Fund [NPRP 9-466-1-103]


This study introduces Auto-FERNet, an automatically searched facial expression recognition network that addresses the poor fit of general-purpose backbones to FER tasks, together with a relabeling method based on Facial Expression Similarity (FES). Experimental results demonstrate that Auto-FERNet achieves high accuracy on FER tasks.

Deep convolutional neural networks have achieved great success on facial expression datasets, both under laboratory conditions and in the wild. However, most related work uses general image classification networks (e.g., VGG, GoogLeNet) as backbones, which fit the Facial Expression Recognition (FER) task poorly, especially in the wild. Moreover, these manually designed networks usually have large numbers of parameters. To tackle these problems, we propose a task-specific, lightweight facial expression recognition network, Auto-FERNet, which is automatically searched by a differentiable Neural Architecture Search (NAS) model directly on an FER dataset. Furthermore, for FER datasets in the wild, we design a simple yet effective relabeling method based on Facial Expression Similarity (FES) to alleviate the label uncertainty caused by natural factors and the subjectivity of annotators. Experiments show the effectiveness of the searched Auto-FERNet on the FER task. Concretely, our architecture achieves a test accuracy of 73.78% on FER2013 without ensembles or extra training data. Notably, experimental results on CK+ and JAFFE outperform the state of the art, with accuracies of 98.89% (10-fold) and 97.14%, respectively, which also validates the robustness of our system.
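
The abstract's "differentiable NAS" refers to the DARTS family of methods, where each edge of a searched cell computes a softmax-weighted mixture of candidate operations so that architecture choices can be optimized by gradient descent alongside network weights. The sketch below illustrates that continuous relaxation only; the candidate operation set, channel handling, and naming are illustrative assumptions, not the paper's exact search space.

```python
# Minimal DARTS-style mixed operation: a continuous relaxation over a
# (hypothetical) candidate op set, with learnable architecture weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_OPS = {
    "conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "conv_5x5": lambda c: nn.Conv2d(c, c, 5, padding=2, bias=False),
    "max_pool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    "identity": lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """One edge of a searchable cell: softmax-weighted sum of candidate ops."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([build(channels) for build in CANDIDATE_OPS.values()])
        # One architecture parameter per candidate operation (the DARTS "alpha").
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)  # relaxed, differentiable op choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def derive(self) -> str:
        """After search, keep only the op with the largest architecture weight."""
        return list(CANDIDATE_OPS)[int(self.alpha.argmax())]
```

In the full method, alternating gradient steps update the network weights on training data and the alpha parameters on validation data; discretizing each edge via derive() yields the final searched architecture.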
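
The FES-based relabeling can be pictured as follows: when the trained network assigns low probability to a sample's given label, the label is corrected toward the predicted class, but only if that class is visually similar to the original one. This is a hedged sketch of that idea; the similarity values, thresholds, and the maybe_relabel helper are placeholders, not the paper's actual FES matrix or decision rule.

```python
# Sketch of similarity-gated relabeling for noisy in-the-wild FER labels.
import numpy as np

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical symmetric FES matrix in [0, 1]; the paper derives its own values.
FES = np.eye(len(EXPRESSIONS))
FES[0, 1] = FES[1, 0] = 0.7   # e.g., angry and disgust look alike
FES[2, 5] = FES[5, 2] = 0.6   # e.g., fear and surprise look alike

def maybe_relabel(softmax_probs: np.ndarray, label: int,
                  conf_thresh: float = 0.3, sim_thresh: float = 0.5) -> int:
    """Return a possibly corrected label for one training sample."""
    pred = int(softmax_probs.argmax())
    # Keep the label if the network still assigns it enough probability.
    if pred == label or softmax_probs[label] >= conf_thresh:
        return label
    # Relabel only toward an expression similar to the annotated one.
    if FES[label, pred] >= sim_thresh:
        return pred
    return label
```

The gating by FES is the key design point: it lets the pipeline correct plausible annotator confusions (e.g., fear vs. surprise) without letting the network freely overwrite labels it merely disagrees with.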
