Article

Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN

Journal

COMPUTER GRAPHICS FORUM
Volume 40, Issue 6, Pages 47-61

Publisher

WILEY
DOI: 10.1111/cgf.14202

Keywords

facial animation; animation; image; video editing; image and video processing; image-based rendering; rendering

Funding

  1. Chinese Scholarship Council (CSC) [201506290085]
  2. Shaanxi Provincial International Science and Technology Collaboration Project [2017KW-ZD-14]
  3. Agency for Innovation by Science and Technology in Flanders (IWT) [131814]
  4. VUB Interdisciplinary Research Program through the EMO-App project


This paper proposes a novel synthesis-by-analysis approach that combines a GAN framework with a state-of-the-art AU detection model to improve AU-driven facial expression generation. With a novel discriminator architecture and a balanced sampling strategy, the experiments show that the method outperforms the state of the art in the realism and expressiveness of the generated facial expressions.
Recent advances in generative adversarial networks (GANs) have shown tremendous success in facial expression generation. However, generating vivid and expressive facial expressions at the Action Unit (AU) level remains challenging, because automatic facial expression analysis for AU intensity is itself an unsolved problem. In this paper, we propose a novel synthesis-by-analysis approach that leverages the power of the GAN framework and a state-of-the-art AU detection model to achieve better results for AU-driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch-attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem in AU synthesis. Extensive experiments on DISFA and DISFA+ show that our approach outperforms the state of the art, both quantitatively and qualitatively, in the photo-realism and expressiveness of the generated facial expressions.
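The balanced sampling idea mentioned in the abstract can be illustrated with a minimal sketch (function names and the toy label distribution below are hypothetical, not taken from the paper): AU intensity labels are heavily skewed toward neutral frames, so each training sample can be weighted inversely to the frequency of its intensity label, making rare high-intensity frames as likely to be drawn as common neutral ones.

```python
import random
from collections import Counter

def balanced_weights(intensities):
    """Per-sample weights inversely proportional to the frequency of each label."""
    counts = Counter(intensities)
    return [1.0 / counts[label] for label in intensities]

def balanced_sample(intensities, k, seed=0):
    """Draw k sample indices so that every intensity level is equally likely overall."""
    rng = random.Random(seed)
    weights = balanced_weights(intensities)
    return rng.choices(range(len(intensities)), weights=weights, k=k)

# Toy imbalanced labels: many neutral (0) frames, few strongly activated (5) ones.
labels = [0] * 90 + [3] * 8 + [5] * 2
idx = balanced_sample(labels, k=3000)
drawn = Counter(labels[i] for i in idx)
# Each intensity level is now drawn roughly 1000 times out of 3000.
```

Each class gets total weight 1 (e.g. ninety samples of weight 1/90), so the three intensity levels are sampled with equal probability regardless of how imbalanced the raw label distribution is. The paper's actual sampling scheme may differ in detail.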

