4.6 Article

Texture and Geometry Scattering Representation-Based Facial Expression Recognition in 2D+3D Videos

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3131345

Keywords

Facial expression recognition; scattering descriptor; 2D and 3D videos; multi-modal fusion

Funding

  1. National Key Research and Development Plan [2016YFC0801002]
  2. National Natural Science Foundation of China [61673033]
  3. State Key Laboratory of Software Development Environment [SKLSDE-2017ZX-07]
  4. Microsoft Research Asia Collaborative Program [FY17-RES-THEME-033]
  5. French National Research Agency (l'Agence Nationale de la Recherche, ANR) through the Jemime project [ANR-13-CORD-0004-02]
  6. PUF 4D Vision project - Partner University Foundation

Abstract

Facial Expression Recognition (FER) is one of the most important topics in computer vision and pattern recognition, and it has attracted increasing attention for its scientific challenges and application potential. In this article, we propose a novel and effective approach to FER using multi-modal two-dimensional (2D) and three-dimensional (3D) videos, which encodes both static and dynamic clues by a scattering convolution network. First, a shape-based detection method is introduced to locate the start and the end of an expression in a video; segment its onset, apex, and offset states; and sample the important frames for emotion analysis. Second, the frames in the apex state of 2D videos are represented by scattering, conveying static texture details. Frames of 3D videos are processed in a similar way, but to highlight static shape details, several geometric maps based on differential quantities of multiple orders, i.e., Normal Maps and Shape Index Maps, are generated as the input to scattering instead of the original smooth facial surfaces. Third, the average of neighboring samples centered at each key texture frame or shape map in the onset state is computed, and the scattering features extracted from all the averaged samples of the 2D and 3D videos are concatenated to capture dynamic texture and shape cues, respectively. Finally, Multiple Kernel Learning is adopted to combine the features of the 2D and 3D modalities and compute similarities to predict the expression label. Thanks to the scattering descriptor, the proposed approach not only encodes distinct local texture and shape variations of different expressions, as milestone operators such as SIFT and HOG do, but also captures subtle information hidden in the high frequencies of both channels, which is crucial for distinguishing expressions that are easily confused. The validation is conducted on the BU-4DFE and BP-4D databases, and the accuracies reached are highly competitive, indicating the competency of the method for this task.
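
Illustrative Code Sketch

To make the pipeline above concrete, the following is a minimal, self-contained Python sketch of its main stages: computing a Shape Index Map from a facial range image via Monge-patch curvature formulas, extracting scattering features with Kymatio's Scattering2D, averaging neighboring onset frames for the dynamic cue, and fusing per-modality kernels for a precomputed-kernel SVM. This is a sketch under stated assumptions, not the authors' implementation: the 64x64 input size, J=3, the neighborhood radius, gamma, and the fixed uniform kernel weights (a stand-in for the learned MKL combination) are all illustrative choices.

    # Minimal sketch of the described pipeline (illustrative assumptions only).
    import numpy as np
    from kymatio.numpy import Scattering2D
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def shape_index_map(depth, eps=1e-8):
        """Shape Index of a range image z(x, y), treated as a Monge patch."""
        zy, zx = np.gradient(depth)           # first-order derivatives
        zxy, zxx = np.gradient(zx)            # second-order derivatives
        zyy, _ = np.gradient(zy)
        g = 1.0 + zx ** 2 + zy ** 2
        H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
             + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)   # mean curvature
        K = (zxx * zyy - zxy ** 2) / g ** 2            # Gaussian curvature
        d = np.sqrt(np.maximum(H ** 2 - K, 0.0))
        k1, k2 = H + d, H - d                 # principal curvatures, k1 >= k2
        return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

    scattering = Scattering2D(J=3, shape=(64, 64))     # J, size: assumptions

    def scatter_vec(img):
        """Flattened 2D scattering coefficients of one 64x64 map."""
        return scattering(img.astype(np.float32)).ravel()

    def dynamic_feature(frames, key_idx, radius=2):
        """Scatter the average of the neighbors centered at a key onset frame."""
        lo, hi = max(0, key_idx - radius), min(len(frames), key_idx + radius + 1)
        return scatter_vec(np.mean(frames[lo:hi], axis=0))

    def fused_kernel(f2d_a, f3d_a, f2d_b=None, f3d_b=None, gamma=1e-3):
        """Uniformly weighted sum of per-modality RBF kernels (MKL stand-in)."""
        return 0.5 * rbf_kernel(f2d_a, f2d_b, gamma=gamma) \
             + 0.5 * rbf_kernel(f3d_a, f3d_b, gamma=gamma)

    # Training: clf = SVC(kernel="precomputed").fit(fused_kernel(X2d, X3d), y)
    # Testing:  clf.predict(fused_kernel(X2d_test, X3d_test, X2d, X3d))

In a full MKL formulation the per-modality weights would be learned jointly with the classifier; the fixed 0.5/0.5 combination above only illustrates how the 2D and 3D similarities are merged before prediction.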

