3.8 Proceedings Paper

Riggable 3D Face Reconstruction via In-Network Optimization

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00615

Keywords

-


This paper introduces a method for riggable 3D face reconstruction from monocular images. A trainable network estimates a personalized face rig together with per-image parameters, going beyond static reconstructions and supporting downstream applications such as video retargeting. The network uses in-network optimization to explicitly enforce constraints, and data-driven priors to regularize the ill-posed monocular setting, achieving state-of-the-art reconstruction accuracy, robustness, and generalization ability.
This paper presents a method for riggable 3D face reconstruction from monocular images, which jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations. To achieve this goal, we design an end-to-end trainable network embedded with a differentiable in-network optimization. The network first parameterizes the face rig as a compact latent code with a neural decoder, and then estimates the latent code as well as per-image parameters via a learnable optimization. By estimating a personalized face rig, our method goes beyond static reconstructions and enables downstream applications such as video retargeting. The in-network optimization explicitly enforces constraints derived from first principles, thus introducing additional priors beyond regression-based methods. Finally, data-driven priors from deep learning are utilized to constrain the ill-posed monocular setting and ease the optimization difficulty. Experiments demonstrate that our method achieves SOTA reconstruction accuracy, reasonable robustness and generalization ability, and supports standard face rig applications.
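
The abstract describes the core idea at a high level: a neural decoder maps a compact latent code to a personalized face rig, and a differentiable optimization inside the network refines that code and the per-image parameters so the whole pipeline can be trained end to end. The sketch below illustrates this in-network optimization pattern in PyTorch; the decoder architecture, dimensions, fitting energy, and names (RigDecoder, fitting_energy, in_network_optimization) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of differentiable "in-network optimization" (hypothetical, not the paper's code):
# a neural decoder turns a compact latent code into face-rig parameters, and a few unrolled
# gradient steps refine the latent code and per-image parameters against a fitting energy.
import torch
import torch.nn as nn


class RigDecoder(nn.Module):
    """Decodes a compact latent code into (hypothetical) face-rig parameters."""

    def __init__(self, code_dim=128, rig_dim=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, rig_dim),
        )

    def forward(self, code):
        return self.net(code)


def fitting_energy(rig_params, per_image_params, observations):
    """Stand-in for a photometric/landmark energy; a plain L2 data term here."""
    prediction = rig_params.unsqueeze(0) + per_image_params  # placeholder "renderer"
    return ((prediction - observations) ** 2).mean()


def in_network_optimization(decoder, observations, num_steps=5, lr=0.1):
    """Refine the latent code and per-image parameters with unrolled gradient steps.

    Because every inner update is an ordinary autograd operation (create_graph=True),
    gradients can flow through the optimization, so the decoder is trainable end to end.
    """
    batch, rig_dim = observations.shape
    code = torch.zeros(decoder.net[0].in_features, requires_grad=True)
    per_image = torch.zeros(batch, rig_dim, requires_grad=True)

    for _ in range(num_steps):
        energy = fitting_energy(decoder(code), per_image, observations)
        grad_code, grad_img = torch.autograd.grad(
            energy, (code, per_image), create_graph=True
        )
        code = code - lr * grad_code          # differentiable update steps
        per_image = per_image - lr * grad_img
    return code, per_image


if __name__ == "__main__":
    decoder = RigDecoder()
    fake_observations = torch.randn(4, 300)  # 4 synthetic "images"
    code, per_image = in_network_optimization(decoder, fake_observations)
    print(code.shape, per_image.shape)
```

In training, an outer loss computed on the refined outputs would backpropagate through these unrolled steps into the decoder weights, which is what lets data-driven priors and the explicit fitting constraints be learned jointly.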


Reviews

Main rating

3.8
(insufficient ratings)

Secondary ratings

Novelty
-
Significance
-
Scientific rigor
-