Article

Visibility-Aware Point-Based Multi-View Stereo Network

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.2988729

Keywords

Three-dimensional displays; Image reconstruction; Geometry; Two dimensional displays; Task analysis; Aggregates; Surface reconstruction; Multi-view stereo; 3D deep learning

Funding

  1. National Key Research and Development Program of China [2016YFE0206200]
  2. National Natural Science Foundation of China (NSFC) [U1613205, 51675291]
  3. NSF [IIS-1764078]
  4. DMAI

Abstract

We introduce VA-Point-MVSNet, a novel visibility-aware point-based deep framework for multi-view stereo (MVS). Unlike existing cost-volume approaches, our method processes the target scene directly as a point cloud and predicts depth in a coarse-to-fine manner: we first generate a coarse depth map, convert it into a point cloud, and refine the point cloud iteratively by estimating the residual between the depth at the current iteration and that of the ground truth. Our network fuses 3D geometry priors and 2D texture information into a feature-augmented point cloud, and processes this point cloud to estimate the 3D flow for each point. This point-based architecture offers higher accuracy, greater computational efficiency, and more flexibility than cost-volume-based counterparts. Furthermore, our visibility-aware multi-view feature aggregation lets the network combine appearance cues across views while accounting for visibility. Experimental results show that our approach achieves a significant improvement in reconstruction quality over state-of-the-art methods on the DTU and Tanks and Temples datasets. The code for VA-Point-MVSNet will be released at https://github.com/callmeray/PointMVSNet.
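To make the abstract's pipeline concrete, below is a minimal PyTorch-style sketch of the coarse-to-fine, visibility-aware refinement loop it describes. This is an illustrative reading of the abstract, not the authors' released implementation (see the linked repository for that): the helper names `unproject`, `visibility_aware_aggregate`, and `refine_depth`, the `flow_net` module, and all tensor shapes are assumptions introduced here for clarity.

```python
# Hypothetical sketch of coarse-to-fine, visibility-aware point refinement.
# All names and shapes are illustrative assumptions, not the official code.
import torch


def unproject(depth, intrinsics_inv):
    """Lift a depth map (B, H, W) to a camera-space point cloud (B, H*W, 3)."""
    b, h, w = depth.shape
    v, u = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype),
        torch.arange(w, dtype=depth.dtype),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(1, -1, 3)
    rays = pix @ intrinsics_inv.transpose(-1, -2)   # back-projected viewing rays
    return rays * depth.reshape(b, -1, 1)           # scale each ray by its depth


def visibility_aware_aggregate(view_feats, vis_weights):
    """Fuse per-view point features using soft visibility weights.

    view_feats:  (B, V, N, C) features sampled from V source views.
    vis_weights: (B, V, N, 1) visibility scores in [0, 1].
    """
    w = vis_weights / vis_weights.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return (w * view_feats).sum(dim=1)              # (B, N, C) fused appearance


def refine_depth(depth, intrinsics_inv, view_feats, vis_weights, flow_net, iters=2):
    """Iteratively refine depth by predicting a per-point residual ("3D flow")."""
    for _ in range(iters):
        points = unproject(depth, intrinsics_inv)   # geometry: (B, H*W, 3)
        fused = visibility_aware_aggregate(view_feats, vis_weights)
        # Feature-augmented point cloud: 3D geometry + fused 2D texture cues.
        augmented = torch.cat([points, fused], dim=-1)
        # flow_net is an assumed module mapping (B, N, 3 + C) -> (B, N, 1).
        residual = flow_net(augmented).squeeze(-1)
        depth = depth + residual.reshape(depth.shape)
    return depth
```

In this reading, the visibility weights down-weight views in which a point is occluded before aggregation, and the iterative residual prediction plays the role of the coarse-to-fine refinement; the authoritative details are in the paper and the repository above.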
