Journal
IEEE ACCESS
Volume 8, Pages 3539-3550
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2019.2962833
Keywords
Graph convolution network; non-parametric mannequin reconstruction; anthropometric mannequin design; parametric reconstruction
Categories
Funding
- National Natural Science Foundation of China [61572124]
- Fundamental Research Funds for the Central Universities [CUSF-DH-D-2017006]
In this paper, we present a novel non-parametric method for precisely reconstructing a three-dimensional (3D) virtual mannequin from anthropometric measurements and mask image(s) based on a Graph Convolution Network (GCN). The proposed method avoids heavy dependence on a particular parametric body model such as SMPL or SCAPE and predicts mesh vertices directly, which is considerably more natural with a GCN than with a typical Convolutional Neural Network (CNN). To further improve reconstruction accuracy and make the reconstruction more controllable, we incorporate the anthropometric measurements into the developed GCN. Our non-parametric reconstruction results distinctly outperform the previous graph convolution method, both visually and in terms of anthropometric accuracy. We also demonstrate that the proposed network can reconstruct a plausible 3D mannequin from a single-view mask. The proposed method can be effortlessly extended to a parametric method by appending a Multilayer Perceptron (MLP) that regresses the parametric space of a Principal Component Analysis (PCA) model to achieve 3D reconstruction as well. Extensive experimental results demonstrate that the anthropometric GCN itself is very useful in improving reconstruction accuracy, and that the proposed method is effective and robust for 3D mannequin reconstruction.
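A GCN predicts per-vertex quantities by propagating features over the mesh adjacency graph rather than over a pixel grid, which is why direct vertex regression is natural for it. As a rough illustration only (not the authors' architecture; `gcn_layer` and the toy 4-vertex mesh are hypothetical), a single Kipf-style graph-convolution layer regressing 3D vertex coordinates might look like:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer over mesh vertices.

    X: (N, F_in) per-vertex features
    A: (N, N) mesh adjacency matrix (1 where two vertices share an edge)
    W: (F_in, F_out) learned weights
    Returns (N, F_out) features after symmetric normalization and ReLU.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# Toy mesh: 4 vertices, edges of a small triangle fan
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 8)    # input per-vertex features
W = np.random.randn(8, 3)    # last layer maps features to 3D coordinates
verts = gcn_layer(X, A, W)   # (4, 3) predicted vertex positions
print(verts.shape)
```

In the paper's setting, anthropometric measurements would additionally be injected as features, and the parametric variant would instead feed pooled graph features to an MLP that outputs PCA coefficients.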