Article

Comparative evaluation of 3D vs. 2D modality for automatic detection of facial action units

Journal

PATTERN RECOGNITION
Volume 45, Issue 2, Pages 767-782

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2011.07.022

Keywords

3D expression recognition; 3D facial expression database; Action unit detection; Facial action coding system; Modality fusion; Gabor wavelets

Funding

  1. Bogazici University [09HA202D]
  2. TUBITAK [107E001]
  3. Turkish State Planning Organization (DPT) [2007K120610]

Abstract

Automatic detection of facial expressions attracts great attention due to its potential applications in human-computer interaction as well as in research on human facial behavior. Most of this research has so far been performed in 2D. However, as the limitations of 2D data have become understood, expression analysis is increasingly being pursued in the 3D face modality, since 3D captures true facial surface data and is less affected by illumination and head pose. At this juncture we have conducted a comparative evaluation of the 3D and 2D face modalities. We extensively investigated 25 action units (AUs) defined in the Facial Action Coding System. For fairness, we map the facial surface geometry into 2D and apply purely data-driven techniques in order to avoid biases due to design. We demonstrate that 3D data performs better overall, especially for lower-face AUs, and that there is room for improvement through fusion of the 2D and 3D modalities. Our study involves determining the best feature set from the 2D and 3D modalities, and the most effective classifier, both from several alternatives. Our detailed analysis highlights the merits and some shortcomings of the 3D modality over 2D in classifying facial expressions from single images. (C) 2011 Elsevier Ltd. All rights reserved.
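The abstract describes a pipeline of this general shape: project the 3D facial surface into a 2D representation, extract Gabor wavelet features, and train a binary detector per action unit. The sketch below is only an illustration of such a pipeline under assumptions of our own (depth-map projection, the filter-bank frequencies and orientations, the coarse pooling, and a linear SVM are illustrative choices), not the configuration reported in the paper.

```python
# Minimal sketch: depth-map projection + Gabor filter bank + per-AU classifier.
# All parameters below are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel
from sklearn.svm import LinearSVC

def depth_map(points, size=96):
    """Project a 3D point cloud (N x 3, columns x/y/z) onto a size x size depth image."""
    xy = points[:, :2]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8)   # normalize x, y to [0, 1]
    cols = np.clip((xy[:, 0] * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip((xy[:, 1] * (size - 1)).astype(int), 0, size - 1)
    img = np.zeros((size, size))
    np.maximum.at(img, (rows, cols), points[:, 2])              # keep largest z per pixel
    return img

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Magnitude responses of a small Gabor filter bank, coarsely pooled and stacked."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            kern = gabor_kernel(frequency=f, theta=k * np.pi / n_orient)
            resp = fftconvolve(img, np.real(kern), mode="same")
            feats.append(np.abs(resp)[::8, ::8].ravel())        # downsample response grid
    return np.concatenate(feats)

def train_au_detector(feature_matrix, au_labels):
    """Train one binary detector (present/absent) for a single action unit."""
    clf = LinearSVC(C=1.0)
    clf.fit(feature_matrix, au_labels)
    return clf
```

The same feature extractor can be applied to a 2D intensity image in place of the depth map, which is one way to set up the kind of like-for-like 2D vs. 3D comparison, and score-level fusion of the two detectors is one possible realization of the modality fusion the abstract mentions.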
