Journal
NEUROCOMPUTING
Volume 463, Pages 444-453
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2021.07.089
Keywords
3D object reconstruction; Stereo vision; Voxel; Point cloud; Neural network
Funding
- Shandong Provincial Natural Science Foundation [ZR2019MF054]
- National Natural Science Foundation of China [61772158, 61872112, 61902091, U1711265]
- National Key Research and Development Program of China [2018YFC0832105]
This paper proposes a new deep learning framework to infer the 3D shape of an object from a pair of stereo images, achieving better performance than state-of-the-art methods. Additionally, a large-scale synthetic benchmarking dataset named StereoShapeNet is introduced to evaluate the reconstruction algorithms.
Inferring the complete 3D shape of an object from an RGB image has shown impressive results; however, existing methods rely primarily on recognizing the most similar 3D model from the training set to solve the problem. These methods suffer from poor generalization and may produce low-quality reconstructions for unseen objects. Nowadays, stereo cameras are pervasive in emerging devices such as dual-lens smartphones and robots, which enables the use of the two-view nature of stereo images to explore the 3D structure and thus improve the reconstruction performance. In this paper, we propose a new deep learning framework for reconstructing the 3D shape of an object from a pair of stereo images, which reasons about the 3D structure of the object by taking bidirectional disparities and feature correspondences between the two views into account. Besides, we present a large-scale synthetic benchmarking dataset, namely StereoShapeNet, containing 1,052,976 pairs of stereo images rendered from ShapeNet along with the corresponding bidirectional depth and disparity maps. Experimental results on the StereoShapeNet benchmark demonstrate that the proposed framework outperforms the state-of-the-art methods. The project page is available at https://haozhexie.com/project/stereo-3d-reconstruction. (c) 2021 Elsevier B.V. All rights reserved.