Proceedings Paper

PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.00610

Funding

  1. ONR MURI Award [N00014-16-1-2007]

Summary: This paper proposes an end-to-end trainable PatchMatch-based MVS approach that combines the advantages of trainable costs and regularization with pixelwise estimates of depth, normals, and visibility. Because the PatchMatch optimization involves non-differentiable sampling and hard decisions, the authors train it with reinforcement learning, outperforming other recent learning-based methods on ETH3D.
Recent learning-based multi-view stereo (MVS) methods show excellent performance with dense cameras and small depth ranges. However, non-learning based approaches still outperform for scenes with large depth ranges and sparser wide-baseline views, in part due to their PatchMatch optimization over pixelwise estimates of depth, normals, and visibility. In this paper, we propose an end-to-end trainable PatchMatch-based MVS approach that combines advantages of trainable costs and regularizations with pixelwise estimates. To overcome the challenge of the non-differentiable PatchMatch optimization that involves iterative sampling and hard decisions, we use reinforcement learning to minimize expected photometric cost and maximize likelihood of ground truth depth and normals. We incorporate normal estimation by using dilated patch kernels and propose a recurrent cost regularization that applies beyond frontal plane-sweep algorithms to our pixelwise depth/normal estimates. We evaluate our method on widely used MVS benchmarks, ETH3D and Tanks and Temples (TnT). On ETH3D, our method outperforms other recent learning-based approaches and performs comparably on advanced TnT.
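The abstract's central idea, using reinforcement learning to train through PatchMatch's non-differentiable hard sampling by minimizing expected photometric cost, can be illustrated with a toy sketch. This is not the authors' implementation: the single-pixel setup, candidate costs, learning rate, and iteration count are all illustrative assumptions; only the REINFORCE-style gradient over a discrete hypothesis choice reflects the technique named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the idea: a single pixel must pick one of K candidate
# depth/normal hypotheses. The hard sampling step is non-differentiable,
# so a REINFORCE-style policy gradient minimizes the *expected*
# photometric cost instead. (K, costs, lr are illustrative assumptions.)
K = 4
photo_cost = np.array([0.9, 0.2, 0.7, 0.5])  # hypothetical matching costs
logits = np.zeros(K)                          # policy parameters
lr = 0.3

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(1000):
    p = softmax(logits)
    a = rng.choice(K, p=p)        # hard, non-differentiable sample
    reward = -photo_cost[a]       # lower photometric cost -> higher reward
    baseline = -(p @ photo_cost)  # expected reward as a variance-reducing baseline
    # REINFORCE: d/d(logits) log pi(a) = one_hot(a) - p
    logits += lr * (np.eye(K)[a] - p) * (reward - baseline)

# The policy concentrates on the lowest-cost hypothesis (index 1).
print(int(np.argmax(logits)))
```

The baseline subtraction mirrors the standard variance-reduction trick in policy-gradient methods; in the full method the sampled hypotheses would come from PatchMatch propagation rather than a fixed candidate list.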
