Article; Proceedings Paper

Scene-Aware 3D Multi-Human Motion Capture from a Single Camera

Journal

COMPUTER GRAPHICS FORUM
Volume 42, Issue 2, Pages 371-383

Publisher

WILEY
DOI: 10.1111/cgf.14768

Keywords

-


This work focuses on estimating the 3D position, body shape, and articulation of multiple humans from a single RGB video with a static camera. The proposed approach leverages pre-trained models for various modalities and introduces a non-linear optimization-based method to jointly solve for the 3D position, articulated pose, individual shapes, and scene scale. The method is evaluated on benchmark datasets and demonstrates robustness to challenging in-the-wild conditions.
In this work, we consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera. In contrast to expensive marker-based or multi-view systems, our lightweight setup is ideal for private users as it enables an affordable 3D motion capture that is easy to install and does not require expert knowledge. To deal with this challenging setting, we leverage recent advances in computer vision using large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized disparity maps, and human segmentation masks. Thus, we introduce the first non-linear optimization-based approach that jointly solves for the 3D position of each human, their articulated pose, their individual shapes as well as the scale of the scene. In particular, we estimate the scene depth and person scale from normalized disparity predictions using the 2D body joints and joint angles. Given the per-frame scene depth, we reconstruct a point-cloud of the static scene in 3D space. Finally, given the per-frame 3D estimates of the humans and scene point-cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial and physical plausibility. We evaluate our method on established multi-person 3D human pose benchmarks where we consistently outperform previous methods and we qualitatively demonstrate that our method is robust to in-the-wild conditions including challenging scenes with people of different sizes. Code: https://github.com/dluvizon/scene-aware-3d-multi-human
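A core step described in the abstract is recovering metric scene depth and person scale from normalized disparity predictions, which are only defined up to an unknown scale and shift, by using the 2D body joints together with the articulated pose. The sketch below illustrates one way such an affine ambiguity could be resolved: assuming inverse depth is an affine function `a*d + b` of the predicted disparity, fit `(a, b)` so that unprojected joint-to-joint distances match known bone lengths. This is an illustrative stand-in, not the authors' implementation; the affine model, pinhole unprojection, grid search, and all function and parameter names are assumptions made for the sketch (the paper formulates a joint non-linear optimization over many more terms).

```python
import numpy as np

def fit_disparity_affine(d_joints, uv_joints, bones, bone_lengths, K):
    """Hypothetical sketch: recover scale/shift (a, b) of a normalized
    disparity map so that inverse depth ~= a * d + b, by matching the
    3D distances between unprojected body joints to known bone lengths.

    d_joints:     (J,)  predicted normalized disparity at each 2D joint
    uv_joints:    (J,2) 2D joint locations in pixels
    bones:        (B,2) integer joint-index pairs forming bones
    bone_lengths: (B,)  metric bone lengths from the body model
    K:            (3,3) pinhole camera intrinsics
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    def unproject(a, b):
        # Convert affine-corrected disparity to metric depth, then
        # back-project each 2D joint to a 3D point.
        z = 1.0 / np.clip(a * d_joints + b, 1e-6, None)
        x = (uv_joints[:, 0] - cx) / fx * z
        y = (uv_joints[:, 1] - cy) / fy * z
        return np.stack([x, y, z], axis=1)

    def residual(a, b):
        # Squared mismatch between unprojected joint distances
        # and the body model's bone lengths.
        P = unproject(a, b)
        est = np.linalg.norm(P[bones[:, 0]] - P[bones[:, 1]], axis=1)
        return np.sum((est - bone_lengths) ** 2)

    # Coarse grid search for simplicity; a real system would use a
    # non-linear solver over all frames and people jointly.
    best = min(
        ((residual(a, b), a, b)
         for a in np.linspace(0.1, 5.0, 50)
         for b in np.linspace(0.0, 2.0, 41)),
        key=lambda t: t[0],
    )
    return best[1], best[2]
```

Because bone lengths are metric, they pin down the absolute scale: uniformly scaling the depths would scale every inter-joint distance, so only one `(a, b)` pair makes the skeleton the right size at the right depth.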

