Proceedings Paper

Rethinking Depth Estimation for Multi-View Stereo: A Unified Representation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00845


Funding

  1. National Natural Science Foundation of China [62072013, U21B2012]
  2. Shenzhen Cultivation of Excellent Scientific and Technological Innovation [RCJC20200714114435057]
  3. Shenzhen Fundamental Research Program [GXWD2020123116580700720200806163656003]
  4. Shenzhen Research Projects [JCYJ20180503182128089, 201806080921419290]


This paper proposes a novel representation for depth estimation, called Unification, which combines the advantages of regression and classification. A new loss function is designed to address the challenge of sample imbalance. Experimental results show that the model outperforms existing methods across datasets and demonstrates strong generalization ability.
Existing learning-based multi-view stereo methods treat depth estimation as either a regression or a classification problem. Although these two representations have recently demonstrated excellent performance, they still have apparent shortcomings: regression methods tend to overfit because they learn the cost volume only indirectly, and classification methods cannot infer the exact depth because their predictions are discrete. In this paper, we propose a novel representation, termed Unification, that unifies the advantages of regression and classification. It directly constrains the cost volume like classification methods, while also achieving sub-pixel depth prediction like regression methods. To exploit the potential of Unification, we design a new loss function, named Unified Focal Loss, which handles the challenge of sample imbalance in a more uniform and principled way. Combining these two lightweight modules, we present a coarse-to-fine framework called UniMVSNet. Ranking first on both the DTU and Tanks and Temples benchmarks verifies that our model not only performs best but also generalizes best.
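The core idea of a unified representation can be sketched as predicting, for each pixel, a discrete depth-bin score (the classification side) plus a continuous sub-bin offset (the regression side). The sketch below is a minimal illustration under the assumption of uniformly spaced depth hypotheses; the function names, tensor shapes, and the generic focal-loss form are assumptions for illustration, not the paper's exact Unity formulation or Unified Focal Loss.

```python
import numpy as np

def decode_depth(bin_scores, offsets, hypotheses):
    """Classification picks the winning depth bin; regression refines it
    with a sub-bin offset, yielding a continuous (sub-pixel) depth.

    bin_scores: (D, H, W) per-hypothesis scores
    offsets:    (D, H, W) per-hypothesis offsets in units of one bin
    hypotheses: (D,) uniformly spaced depth values
    """
    best = np.argmax(bin_scores, axis=0)          # (H, W) winning bin index
    interval = hypotheses[1] - hypotheses[0]      # uniform bin width
    coarse = hypotheses[best]                     # discrete depth (classification)
    fine = np.take_along_axis(offsets, best[None], axis=0)[0] * interval
    return coarse + fine                          # continuous depth (regression)

def focal_weighted_bce(pred, target, gamma=2.0):
    """Focal-style binary cross-entropy (after Lin et al.) that down-weights
    easy samples -- a generic stand-in for the paper's Unified Focal Loss."""
    pt = np.where(target > 0.5, pred, 1.0 - pred)  # probability of true class
    pt = np.clip(pt, 1e-7, 1.0)
    return -((1.0 - pt) ** gamma) * np.log(pt)
```

With four hypotheses at 1 m spacing, a peak score at bin 2 and a regressed offset of 0.5 decodes to a depth of 3.5 m, between the discrete hypotheses; the focal weighting shrinks the loss contribution of well-classified bins so rare positive bins dominate training.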


