Article

3D Face From X: Learning Face Shape From Diverse Sources

Journal

IEEE Transactions on Image Processing
Volume 30, Pages 3815-3827

Publisher

IEEE
DOI: 10.1109/TIP.2021.3065798

Keywords

Faces; Three-dimensional displays; Solid modeling; Image reconstruction; Data models; Face recognition; Shape; Face modeling; shape from X; 3D face reconstruction

Funding

  1. National Natural Science Foundation of China [61672481]
  2. Youth Innovation Promotion Association CAS [2018495]

Abstract

This work introduces a novel approach to jointly learning a 3D face parametric model and 3D face reconstruction from diverse sources, in contrast to previous methods that typically learn from a single source. By utilizing training data from more sources, a more powerful face model can be learned.
We present a novel method to jointly learn a 3D face parametric model and 3D face reconstruction from diverse sources. Previous methods usually learn 3D face modeling from a single kind of source, such as scanned data or in-the-wild images. Although 3D scanned data contain accurate geometric information of face shapes, the capture system is expensive and such datasets usually contain a small number of subjects. On the other hand, in-the-wild face images are easy to obtain and available in large numbers, but they do not contain explicit geometric information. In this paper, we propose a method to learn a unified face model from diverse sources. Besides scanned face data and face images, we also utilize a large number of RGB-D images captured with an iPhone X to bridge the gap between the two sources. Experimental results demonstrate that with training data from more sources, we can learn a more powerful face model.
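Since the listing gives only the abstract, the following is a minimal, illustrative sketch of the core idea it describes: fitting a learnable linear parametric face model jointly against heterogeneous supervision, with dense vertex losses for registered 3D scans, sparse 2D landmark reprojection losses for in-the-wild images, and depth-point losses for RGB-D captures. All class names, tensor shapes, the camera model, and the loss weights below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (assumed shapes, names, and weights), not the
# authors' code: one shared parametric face model trained from three sources.
import torch
import torch.nn as nn


class LinearFaceModel(nn.Module):
    """Shape = mean + identity_basis @ id_coeff + expression_basis @ exp_coeff."""

    def __init__(self, n_verts=1000, n_id=80, n_exp=29):
        super().__init__()
        # The bases are learnable, which is what allows the face model itself
        # to be learned jointly with per-sample reconstruction.
        self.mean = nn.Parameter(torch.zeros(n_verts * 3))
        self.id_basis = nn.Parameter(0.01 * torch.randn(n_verts * 3, n_id))
        self.exp_basis = nn.Parameter(0.01 * torch.randn(n_verts * 3, n_exp))

    def forward(self, id_coeff, exp_coeff):
        offsets = id_coeff @ self.id_basis.T + exp_coeff @ self.exp_basis.T
        return (self.mean + offsets).view(-1, self.mean.numel() // 3, 3)


def scan_loss(pred_verts, scan_verts):
    # Dense vertex supervision, available only for registered 3D scans.
    return (pred_verts - scan_verts).norm(dim=-1).mean()


def landmark_loss(pred_verts, lm_idx, lm_2d, cam):
    # Weak 2D supervision for in-the-wild images: project a sparse set of
    # model vertices with a scaled-orthographic camera (s, tx, ty).
    s, t = cam[:, :1], cam[:, 1:]
    proj = s.unsqueeze(1) * pred_verts[:, lm_idx, :2] + t.unsqueeze(1)
    return (proj - lm_2d).norm(dim=-1).mean()


def depth_loss(pred_verts, depth_verts, mask):
    # RGB-D supervision: compare against back-projected depth points where valid.
    diff = (pred_verts - depth_verts).norm(dim=-1)
    return (diff * mask).sum() / mask.sum().clamp(min=1)


if __name__ == "__main__":
    model = LinearFaceModel()
    B, V = 4, 1000
    id_c, exp_c = torch.randn(B, 80), torch.randn(B, 29)
    verts = model(id_c, exp_c)

    # Placeholder ground truth and weights; a real batch would mix sources and
    # apply only the loss terms whose supervision exists for each sample.
    total = (1.0 * scan_loss(verts, torch.randn(B, V, 3))
             + 0.1 * landmark_loss(verts, torch.arange(68), torch.randn(B, 68, 2),
                                   torch.tensor([[1.0, 0.0, 0.0]] * B))
             + 1.0 * depth_loss(verts, torch.randn(B, V, 3), torch.ones(B, V)))
    total.backward()
    print(float(total))
```

The key design point this sketch tries to convey is that the shared, learnable bases are what tie the three sources together: scans constrain geometry densely, images contribute scale through many identities, and RGB-D data bridge the two, as the abstract describes.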
