Article

HandVoxNet++: 3D Hand Shape and Pose Estimation Using Voxel-Based Neural Networks

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3122874

Keywords

3D hand shape and pose from a single depth map; voxelized hand shape; graph convolutions; TSDF; 3D data augmentation; shape registration; GCN-MeshReg; NRGA++

Funding

  1. German Federal Ministry of Education and Research [01IW18002, 01IW21001]
  2. ERC [770784]

Abstract
3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. Existing methods addressing it directly regress hand meshes via 2D convolutional neural networks, which leads to artifacts due to perspective distortions in the images. To address the limitations of the existing methods, we develop HandVoxNet++, i.e., a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. The input to our network is a 3D voxelized depth map based on the truncated signed distance function (TSDF). HandVoxNet++ relies on two hand shape representations. The first one is the 3D voxelized grid of the hand shape, which does not preserve the mesh topology but is the most accurate representation. The second one is the hand surface, which preserves the mesh topology. We combine the advantages of both representations by aligning the hand surface to the voxelized hand shape, either with a new neural Graph-Convolutions-based Mesh Registration (GCN-MeshReg) or with the classical segment-wise Non-Rigid Gravitational Approach (NRGA++), which does not rely on training data. In extensive evaluations on three public benchmarks, i.e., SynHand5M, the depth-based HANDS19 challenge and HO-3D, the proposed HandVoxNet++ achieves state-of-the-art performance. In this journal extension of our previous approach presented at CVPR 2020, we gain 41.09% and 13.7% higher shape alignment accuracy on the SynHand5M and HANDS19 datasets, respectively. Our method was ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose Estimation) at the time of the submission of our results to the portal in August 2020.
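The TSDF voxelization of the input depth map mentioned above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the function name, grid resolution, truncation distance, extent, and the centering heuristic are all assumptions. Each voxel center is projected into the depth image, and the signed difference between the observed surface depth and the voxel depth is truncated to [-1, 1].

```python
import numpy as np

def depth_to_tsdf(depth, fx, fy, cx, cy, grid_size=64, trunc=0.05,
                  center=None, extent=0.3):
    """Voxelize a depth map (meters) into a truncated signed distance field.

    All names and defaults are illustrative assumptions. The grid is a cube
    of side `extent` meters with `grid_size` voxels per axis, centered on
    `center` (camera coordinates); fx, fy, cx, cy are pinhole intrinsics.
    """
    h, w = depth.shape
    if center is None:
        # Heuristic: center the grid on the mean of the valid depth points.
        vs, us = np.nonzero(depth > 0)
        zs = depth[depth > 0]
        center = np.array([((us - cx) * zs / fx).mean(),
                           ((vs - cy) * zs / fy).mean(),
                           zs.mean()])

    # Voxel centers in camera coordinates, shape (G, G, G, 3).
    lin = (np.arange(grid_size) + 0.5) / grid_size - 0.5  # in [-0.5, 0.5)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1) * extent + center

    # Project voxel centers into the depth image (z > 0 for a valid center).
    z = pts[..., 2]
    u = np.round(pts[..., 0] * fx / z + cx).astype(int)
    v = np.round(pts[..., 1] * fy / z + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)

    # Default +1 (free space); truncated SDF where a depth sample exists.
    tsdf = np.ones_like(z)
    du = depth[v[inside], u[inside]]          # observed surface depth
    sdf = np.clip((du - z[inside]) / trunc, -1.0, 1.0)  # + in front of surface
    tsdf[inside] = np.where(du > 0, sdf, 1.0)  # invalid pixels stay free space
    return tsdf
```

For example, a constant depth map of a fronto-parallel plane yields a grid that is positive (in front of the surface) in the near slices and negative behind it; the resulting `(G, G, G)` volume is what a 3D CNN such as the one described above would consume.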
