Journal
IEEE TRANSACTIONS ON CYBERNETICS
Volume: -, Issue: -, Pages: -
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2023.3242368
Keywords
Faces; Solid modeling; Shape; Image reconstruction; Face recognition; Data models; Computational modeling; 3-D dense face alignment; 3-D face reconstruction; expression synthesis; facial manipulation
Abstract
The 3-D morphable model (3DMM) has widely benefited tasks involving 3-D faces, thanks to its parametric representation of facial geometry and appearance. However, previous 3-D face reconstruction methods have limited power to represent facial expressions due to unbalanced training data distributions and insufficient ground-truth 3-D shapes. In this article, we propose a novel framework to learn personalized shapes so that the reconstructed model fits the corresponding face images well. Specifically, we augment the dataset following several principles to balance the facial shape and expression distributions. A mesh editing method is presented as an expression synthesizer to generate more face images with various expressions. In addition, we improve pose estimation accuracy by converting the projection parameters into Euler angles. Finally, a weighted sampling method is proposed to improve the robustness of the training process, where we define the offset between the base face model and the ground-truth face model as the sampling probability of each vertex. Experiments on several challenging benchmarks demonstrate that our method achieves state-of-the-art performance.
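The two computational steps named in the abstract, offset-based vertex sampling and the projection-to-Euler conversion, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the L2-norm-then-normalize choice for the sampling probabilities, and the ZYX rotation convention are all assumptions.

```python
import numpy as np

def vertex_sampling_weights(base_verts, gt_verts):
    """Per-vertex sampling probabilities from base/ground-truth offsets.

    base_verts, gt_verts: (N, 3) arrays of mesh vertex positions.
    The per-vertex offset magnitude is normalized into a probability
    distribution, so vertices that deviate more from the base model
    (e.g. around expressive regions) are sampled more often.
    """
    offsets = np.linalg.norm(gt_verts - base_verts, axis=1)
    return offsets / offsets.sum()

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to Euler angles (pitch, yaw, roll).

    Assumes the ZYX convention R = Rz(roll) @ Ry(yaw) @ Rx(pitch);
    the clip guards against numerical drift outside [-1, 1].
    """
    yaw = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return pitch, yaw, roll
```

For training, the weights would be passed to a sampler (e.g. `np.random.choice(N, size=k, p=weights)`) so that high-offset vertices contribute more terms to the reconstruction loss.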