Article

Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.3002500

Keywords

Face; Generative adversarial networks; Videos; Dynamics; Geometry; Training; Facial expression generation; conditional manifold-valued Wasserstein generative adversarial networks; facial landmarks; Riemannian geometry

Funding

  1. CNRST's Scholarship of Excellence (Morocco)
  2. CAMPUS FRANCE [41539RH]
  3. National Agency for Research (ANR) under the Investments for the future program [ANR-16-IDEX-0004 ULNE]

Abstract

In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We exploit the face geometry by modeling the motion of facial landmarks as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics for different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed into sequences of landmarks and then into image sequences by editing the texture information with another conditional GAN. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance, and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
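To make the "curves as points on a hypersphere" idea concrete, here is a minimal sketch of one common way to do this in Riemannian shape analysis: encode a landmark trajectory with the square-root velocity function (SRVF) and rescale it to unit norm, so each motion becomes a point on the unit hypersphere. The function name, array shapes, and the choice of SRVF as the encoding are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def curve_to_hypersphere(landmarks):
    """Encode a facial-landmark trajectory as a point on the unit hypersphere.

    landmarks: array of shape (T, 2K) -- K 2-D landmarks tracked over T frames.
    Returns a flattened unit-norm vector (a point on the hypersphere).
    Assumption: a square-root velocity (SRVF) encoding, as used in
    Riemannian curve analysis; the paper's exact encoding may differ.
    """
    # Velocity of the curve along the time axis (finite differences).
    v = np.gradient(landmarks, axis=0)                    # (T, 2K)
    speed = np.linalg.norm(v, axis=1, keepdims=True)      # (T, 1)
    # SRVF: q(t) = v(t) / sqrt(|v(t)|); the epsilon guards zero velocity.
    q = v / np.sqrt(np.maximum(speed, 1e-8))
    q = q.ravel()
    # Rescale to unit L2 norm -> a point on the unit hypersphere.
    return q / np.linalg.norm(q)

# Usage: a synthetic 30-frame trajectory of 68 landmarks (hypothetical data).
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(30, 136)), axis=0)
q = curve_to_hypersphere(traj)
print(round(float(np.linalg.norm(q)), 6))  # unit norm, so this prints 1.0
```

Mapping every motion to the same unit sphere is what lets a manifold-valued GAN generate directly in that space: samples are constrained to the sphere, and geodesic (great-circle) interpolation between them yields the continuous motion the abstract describes.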
