Article

SurRF: Unsupervised Multi-View Stereopsis by Learning Surface Radiance Field

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3116695

Keywords

Three-dimensional displays; Surface reconstruction; Surface texture; Geometry; Image reconstruction; Shape; Rendering (computer graphics); Multi-view stereopsis; unsupervised learning; neural rendering

Funding

  1. National Natural Science Foundation of China (NSFC) [61860206003, 62088102]
  2. Shenzhen Science and Technology Research and Development Funds [JCYJ20180507183706645]
  3. Beijing National Research Center for Information Science and Technology (BNRist) [BNR2019TD01022, BNR2020RC01002]
  4. China Postdoctoral Science Foundation [2020TQ0172, 2020M670338]


This paper proposes SurRF, an unsupervised multi-view stereopsis pipeline that learns a Surface Radiance Field. By defining the radiance field on a continuous and explicit 2D surface, SurRF provides a compact representation while maintaining complete shape and realistic texture, leading to competitive results on large-scale complex scenes.
The recent success in supervised multi-view stereopsis (MVS) relies on laboriously collected real-world 3D data. While the latest differentiable rendering techniques enable unsupervised MVS, they are restricted to discretized (e.g., point cloud) or implicit geometric representations, suffering from either low completeness in textureless regions or loss of geometric detail in complex scenes. In this paper, we propose SurRF, an unsupervised MVS pipeline that learns a Surface Radiance Field, i.e., a radiance field defined on a continuous and explicit 2D surface. Our key insight is that, in a local region, the explicit surface can be gradually deformed from a continuous initialization along view-dependent camera rays by differentiable rendering. This enables us to define the radiance field only on a 2D deformable surface rather than in a dense volume of 3D space, yielding a compact representation while maintaining complete shape and realistic texture for large-scale complex scenes. We experimentally demonstrate that SurRF produces results competitive with the state of the art on various challenging real-world scenes, without any 3D supervision. Moreover, SurRF shows great potential for combining the advantages of meshes (scene manipulation), continuous surfaces (high geometric resolution), and radiance fields (realistic rendering).
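The core mechanism described above, deforming an explicit surface by gradient descent on a photometric rendering loss, can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' implementation: a small height field stands in for SurRF's deformable 2D surface, simple Lambertian shading stands in for the learned radiance field, and numerical finite-difference gradients replace automatic differentiation.

```python
import numpy as np

# Hypothetical toy sketch (not the authors' code): an explicit surface,
# here a height field z(u, v), is rendered and then deformed by gradient
# descent on a photometric loss -- the same principle by which SurRF
# deforms its explicit 2D surface under differentiable rendering.

N = 12
light = np.array([0.4, 0.4, 0.8])
light /= np.linalg.norm(light)             # oblique light direction

def render(z):
    """Lambertian intensity image of the height field."""
    gx = np.gradient(z, axis=1)            # finite-difference slopes
    gy = np.gradient(z, axis=0)
    n = np.stack([-gx, -gy, np.ones_like(z)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return np.clip(n @ light, 0.0, 1.0)

# Target image: a rendering of a "ground-truth" Gaussian bump surface.
uu, vv = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
z_true = 3.0 * np.exp(-4.0 * (uu ** 2 + vv ** 2))
target = render(z_true)

def loss(z):
    """Photometric (mean squared intensity) error against the target."""
    return np.mean((render(z) - target) ** 2)

# Deform a flat initial surface with numerical gradient descent.
z = np.zeros((N, N))
lr, eps = 50.0, 1e-4
for _ in range(100):
    base = loss(z)
    g = np.zeros_like(z)
    for i in range(N):
        for j in range(N):
            zp = z.copy()
            zp[i, j] += eps
            g[i, j] = (loss(zp) - base) / eps
    z -= lr * g

print(f"photometric loss: {loss(np.zeros((N, N))):.5f} -> {loss(z):.5f}")
```

The loss drops as the flat surface bends toward the bump: image-space supervision alone recovers surface geometry, which is the sense in which the pipeline needs no 3D ground truth. SurRF additionally makes the radiance on the surface view-dependent and uses analytic gradients, which this sketch omits for brevity.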

