3.8 Proceedings Paper

Self-supervised Learning of Depth Inference for Multi-view Stereo

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00744

Funding

  Australian Research Council [DE180100628, DP200102274]

The paper introduces a self-supervised learning framework for multi-view stereo, which leverages pseudo labels from input data to estimate depth information. Experimental results demonstrate that the proposed method outperforms existing unsupervised multi-view stereo networks on the DTU dataset.
Recent supervised multi-view depth estimation networks have achieved promising results. Like all supervised approaches, these networks require ground-truth data during training. However, collecting a large amount of multi-view depth data is very challenging. Here, we propose a self-supervised learning framework for multi-view stereo that exploits pseudo labels from the input data. We start by learning to estimate depth maps as initial pseudo labels under an unsupervised learning framework relying on an image reconstruction loss as supervision. We then refine the initial pseudo labels using a carefully designed pipeline leveraging depth information inferred from a higher-resolution image and neighboring views. We use these high-quality pseudo labels as the supervision signal to train the network and iteratively improve its performance by self-training. Extensive experiments on the DTU dataset show that our proposed self-supervised learning framework outperforms existing unsupervised multi-view stereo networks by a large margin and performs on par with its supervised counterpart. Code is available at https://github.com/JiayuYANG/Self-supervised-CVP-MVSNet.
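
The two supervision signals described in the abstract can be sketched in PyTorch. The snippet below is a minimal, illustrative sketch rather than the authors' implementation: photometric_loss warps a source view into the reference view using the predicted depth to form the image reconstruction loss of the initial unsupervised stage, and pseudo_label_loss shows how refined pseudo depth labels supervise the network during self-training. The function names, tensor shapes, and the single shared intrinsic matrix are assumptions.

```python
# Minimal sketch (assumed shapes and names, not the paper's code):
#   ref_img, src_img : (B, 3, H, W) reference and source images
#   depth_ref        : (B, 1, H, W) predicted reference-view depth
#   K                : (3, 3) shared pinhole intrinsics (assumption)
#   R, t             : (3, 3), (3,) relative pose from reference to source camera
import torch
import torch.nn.functional as F

def warp_src_to_ref(src_img, depth_ref, K, R, t):
    """Resample the source image at the pixels where reference-view points
    project, using the predicted reference depth and the relative pose."""
    B, _, H, W = src_img.shape
    device = src_img.device
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([x, y, torch.ones_like(x)], dim=0).reshape(3, -1)  # (3, H*W)
    # Back-project reference pixels to 3D with the predicted depth.
    cam_ref = (torch.inverse(K) @ pix).unsqueeze(0) * depth_ref.reshape(B, 1, -1)
    # Rigid transform into the source camera, then project with K.
    cam_src = R @ cam_ref + t.reshape(1, 3, 1)
    proj = K @ cam_src
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)                       # (B, 2, H*W)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2.0 * uv[:, 0] / (W - 1) - 1.0,
                        2.0 * uv[:, 1] / (H - 1) - 1.0], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

def photometric_loss(ref_img, src_img, depth_ref, K, R, t):
    """Image reconstruction loss: the self-supervision used to learn the
    initial depth estimates (pseudo labels) without ground truth."""
    return (warp_src_to_ref(src_img, depth_ref, K, R, t) - ref_img).abs().mean()

def pseudo_label_loss(pred_depth, pseudo_depth, valid_mask):
    """Self-training loss: refined pseudo depth labels supervise the network
    on pixels where the labels are considered reliable."""
    diff = (pred_depth - pseudo_depth).abs() * valid_mask
    return diff.sum() / valid_mask.sum().clamp(min=1.0)
```

In practice the reconstruction loss would typically be averaged over several source views and masked for occlusions; per the abstract, the pseudo labels are additionally refined with depth inferred from a higher-resolution image and neighboring views before each self-training round.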
