Journal: IMAGE AND VISION COMPUTING
Volume 30, Issue 10, Pages 738-749
Publisher: ELSEVIER
DOI: 10.1016/j.imavis.2012.02.004
Keywords: Expression recognition; 3D face models; 4D face videos; Mesh registration
Funding
- Office of the Director of National Intelligence (ODNI)
- Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory (ARL)
- University of Houston (UH) Eckhard Pfeiffer Endowment Fund
Facial expression analysis has interested many researchers in the past decade due to its potential applications in various fields such as human-computer interaction, psychological studies, and facial animation. Three-dimensional facial data has been shown to be insensitive to illumination conditions and head pose, and has therefore gathered attention in recent years. In this paper, we focus on discrete expression classification using 3D data from the human face. The paper is divided into two parts. In the first part, we present improvements to the fitting of the Annotated Face Model (AFM) so that a dense point correspondence can be found, in terms of both position and semantics, among static 3D face scans or frames in 3D face sequences. Then, an expression recognition framework on static 3D images is presented. It is based on a Point Distribution Model (PDM) which can be built on different features. In the second part of this article, a systematic pipeline that operates on dynamic 3D sequences (4D datasets or 3D videos) is proposed, and alternative modules are investigated as a comparative study. We evaluated both the 3D and 4D Facial Expression Recognition pipelines on two publicly available facial expression databases and obtained promising results. (c) 2012 Elsevier B.V. All rights reserved.
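The abstract's Point Distribution Model refers to the standard construction: PCA over feature vectors extracted from densely corresponded scans, keeping the modes that explain most of the variance. The sketch below is a minimal, generic PDM in NumPy, not the authors' implementation; the function names and the 95% variance threshold are illustrative assumptions.

```python
import numpy as np

def build_pdm(feature_vectors, variance_kept=0.95):
    """Build a generic Point Distribution Model via PCA.

    `feature_vectors` is an (n_samples, n_features) array of registered
    features, e.g. flattened vertex coordinates of corresponded scans.
    Returns the mean, the retained eigenmodes (rows), and their variances.
    The 0.95 variance threshold is an illustrative assumption.
    """
    X = np.asarray(feature_vectors, dtype=float)
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (X.shape[0] - 1)          # per-mode variance
    ratio = np.cumsum(var) / var.sum()        # cumulative explained variance
    k = int(np.searchsorted(ratio, variance_kept) + 1)
    return mean, Vt[:k], var[:k]

def project(x, mean, modes):
    """Express a new registered scan as PDM shape parameters."""
    return modes @ (np.asarray(x, dtype=float) - mean)
```

In a recognition setting, the shape parameters returned by `project` would serve as the feature vector fed to a classifier.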