Proceedings Paper

LEAP: Learning Articulated Occupancy of People

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.01032

Keywords

-

Abstract

Substantial progress has been made on modeling rigid 3D objects using deep implicit representations. Yet, extending these methods to learn neural models of human shape is still in its infancy. Human bodies are complex and the key challenge is to learn a representation that generalizes such that it can express body shape deformations for unseen subjects in unseen, highly-articulated, poses. To address this challenge, we introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body. Given a set of bone transformations (i.e. joint locations and rotations) and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions and then efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space. Experiments show that our canonicalized occupancy estimation with the learned LBS functions greatly improves the generalization capability of the learned occupancy representation across various human shapes and poses, outperforming existing solutions in all settings.
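The abstract describes a two-stage query: a learned linear blend skinning (LBS) module maps a posed-space query point into canonical space, and an occupancy network then evaluates the canonicalized point. As a rough illustration only (not the authors' code), the sketch below mimics that flow in PyTorch; the module names (`LBSWeightNet`, `OccupancyNet`), layer sizes, the single shape/pose code, and the use of one blended inverse bone transform are all assumptions made for the example.

```python
# Hypothetical sketch of the occupancy query described in the abstract:
# predict per-bone LBS weights for a query point, map the point to canonical
# space with the inverse of the blended bone transformation, then evaluate an
# occupancy network on the canonicalized point. Not the authors' implementation.

import torch
import torch.nn as nn


class LBSWeightNet(nn.Module):
    """Predicts per-bone skinning weights for a 3D query point (assumed MLP)."""

    def __init__(self, num_bones: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the weights positive and summing to one.
        return torch.softmax(self.mlp(x), dim=-1)


class OccupancyNet(nn.Module):
    """Maps a canonicalized point plus a shape/pose code to an occupancy value."""

    def __init__(self, code_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_canonical: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(torch.cat([x_canonical, code], dim=-1)))


def query_occupancy(x, bone_transforms, weight_net, occ_net, code):
    """Occupancy of query points `x` (N, 3) given bone transforms (K, 4, 4)."""
    w = weight_net(x)                                            # (N, K) weights
    # Blend per-bone transforms, then invert to go from posed to canonical space.
    blended = torch.einsum("nk,kij->nij", w, bone_transforms)    # (N, 4, 4)
    x_h = torch.cat([x, torch.ones_like(x[:, :1])], dim=-1)      # homogeneous coords
    x_canonical = torch.einsum("nij,nj->ni", torch.inverse(blended), x_h)[:, :3]
    return occ_net(x_canonical, code.expand(x.shape[0], -1))     # (N, 1) in [0, 1]


if __name__ == "__main__":
    K = 24                                        # e.g. an SMPL-like skeleton
    weight_net, occ_net = LBSWeightNet(K), OccupancyNet()
    bones = torch.eye(4).repeat(K, 1, 1)          # identity pose for the demo
    points = torch.rand(8, 3)
    occ = query_occupancy(points, bones, weight_net, occ_net, torch.zeros(1, 64))
    print(occ.shape)                              # torch.Size([8, 1])
```

In the paper's formulation the skinning functions are learned jointly with the occupancy network; this sketch only shows how a query point would flow through such a model at inference time.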

Authors

