Article

AlignBodyNet: Deep Learning-Based Alignment of Non-Overlapping Partial Body Point Clouds From a Single Depth Camera

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2022.3222501

Keywords

Point cloud compression; Three-dimensional displays; Cameras; Solid modeling; Shape; Deep learning; Task analysis; 3-D scanning; deep learning on point clouds; iterative closest point (ICP); non-overlapping registration; partial registration; virtual correspondence

Abstract

This article proposes a novel deep learning framework to generate omnidirectional 3-D point clouds of human bodies by registering the front- and back-facing partial scans captured by a single depth camera. Our approach does not require calibration-assisting devices or canonical postures, nor does it make assumptions about an initial alignment or correspondences between the partial scans. This is achieved by factoring the problem into two subproblems: 1) building virtual correspondences for the partial scans and 2) implicitly predicting the rigid transformation between the two partial scans from the predicted virtual correspondences. In this study, we regress the skinned multi-person linear model (SMPL) vertices from the two partial scans to build the virtual correspondences. The main challenges are: 1) estimating the body shape and pose under clothing from a single partial point cloud of a dressed body and 2) ensuring that the bodies predicted from the front- and back-facing inputs are the same. We therefore propose a novel deep neural network (DNN), dubbed AlignBodyNet, that introduces shape-interrelated features and a shape-constraint loss to resolve this problem. We also provide a simple yet efficient method for generating real-world partial scans from complete models, which fills the gap left by the lack of quantitative comparisons on real-world data in various studies, including partial registration, shape completion, and view synthesis. Experiments on synthetic and real-world data show that our method achieves state-of-the-art performance in both objective and subjective terms.
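In the paper, the rigid transformation between the two scans is predicted implicitly by the network from the virtual correspondences. For intuition only, the sketch below shows the classical closed-form alternative: once one-to-one correspondences (e.g., the regressed SMPL vertices) are available, a rigid transform can be recovered with the Kabsch/Procrustes solution. The names `smpl_front`, `smpl_back`, and `back_scan` are hypothetical placeholders, and this is not the authors' network-based method.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Closed-form rigid alignment (Kabsch/Procrustes) between two
    point sets with known one-to-one correspondences.

    src, dst: (N, 3) arrays of corresponding points, e.g. SMPL
    vertices regressed from the back- and front-facing scans.
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Reflection guard: enforce a proper rotation (det(R) = +1).
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

# Hypothetical usage: align the back-facing partial scan into the
# front-facing frame using the shared SMPL vertices as virtual
# correspondences.
# R, t = rigid_transform_from_correspondences(smpl_back, smpl_front)
# back_scan_aligned = back_scan @ R.T + t
```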
