Article

Depth-Guided Optimization of Neural Radiance Fields for Indoor Multi-View Stereo

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2023.3263464

Keywords

Depth estimation; 3D reconstruction; multi-view stereo; neural radiance fields


Abstract

In this work, we present NerfingMVS, a new multi-view depth estimation method that utilizes both conventional reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF). Unlike existing neural-network-based optimization methods that rely on estimated correspondences, our method directly optimizes over implicit volumes, eliminating the challenging step of matching pixels in indoor scenes. The key to our approach is to utilize the learning-based priors to guide the optimization process of NeRF. Our system first adapts a monocular depth network to the target scene by finetuning on its MVS reconstruction from COLMAP. Then, we show that the shape-radiance ambiguity of NeRF still exists in indoor environments and propose to address the issue by employing the adapted depth priors to guide the sampling process of volume rendering. Finally, a per-pixel confidence map, obtained by computing the error on the rendered image, is used to further improve the depth quality. We further present NerfingMVS++, which introduces a coarse-to-fine depth-prior training strategy that directly utilizes sparse SfM points and replaces uniform sampling with Gaussian sampling to boost performance. Experiments show that NerfingMVS and its extension NerfingMVS++ achieve state-of-the-art performance on the indoor datasets ScanNet and NYU Depth V2. In addition, we show that the guided optimization scheme does not sacrifice the original synthesis capability of neural radiance fields, improving rendering quality on both seen and novel views. Code is available at https://github.com/weiyithu/NerfingMVS.
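To make the sampling idea concrete, the following is a minimal NumPy sketch of depth-guided ray sampling as the abstract describes it: instead of sampling depths uniformly over the full ray interval, samples are concentrated around an adapted depth prior, either in a uniform band (NerfingMVS-style) or with a Gaussian centered on the prior (NerfingMVS++-style). The function name, the error-band parameterization, and the Gaussian scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def depth_guided_sample(depth_prior, error, n_samples, near, far,
                        gaussian=False, rng=None):
    """Sample depths along a ray, guided by a per-pixel depth prior.

    depth_prior : adapted monocular depth estimate for this pixel (assumed given)
    error       : half-width of the trusted band around the prior (illustrative)
    near, far   : full ray interval that plain NeRF would sample uniformly
    """
    if rng is None:
        rng = np.random.default_rng(0)
    if gaussian:
        # NerfingMVS++-style: Gaussian samples centered on the depth prior,
        # clipped back into the valid ray interval.
        t = rng.normal(loc=depth_prior, scale=error / 2.0, size=n_samples)
        t = np.clip(t, near, far)
    else:
        # NerfingMVS-style: uniform samples restricted to the guided band
        # [prior - error, prior + error], intersected with [near, far].
        lo = max(near, depth_prior - error)
        hi = min(far, depth_prior + error)
        t = rng.uniform(lo, hi, size=n_samples)
    # Volume rendering expects sorted sample depths along the ray.
    return np.sort(t)
```

In a full system these sample depths would feed the usual NeRF volume-rendering quadrature; the point of the guidance is that far fewer samples land in empty space, which is what resolves the shape-radiance ambiguity in textureless indoor regions.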

