Proceedings Paper

TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00839

Funding

  1. National Key R&D Plan of the Ministry of Science and Technology [2020AAA0104400]

Abstract

In this paper, we present TransMVSNet, based on our exploration of feature matching in multi-view stereo (MVS). We trace MVS back to its nature as a feature matching task and therefore propose a powerful Feature Matching Transformer (FMT) that leverages intra- (self-) and inter- (cross-) attention to aggregate long-range context information within and across images. To facilitate a better adaptation of the FMT, we use an Adaptive Receptive Field (ARF) module to ensure a smooth transition in the scope of features, and we bridge different stages with a feature pathway that passes transformed features and gradients across scales. In addition, we apply pair-wise feature correlation to measure similarity between features, and we adopt an ambiguity-reducing focal loss to strengthen the supervision. To the best of our knowledge, TransMVSNet is the first attempt to bring Transformers to the task of MVS. As a result, our method achieves state-of-the-art performance on the DTU dataset, the Tanks and Temples benchmark, and the BlendedMVS dataset. Code is available at https://github.com/MegviiRobot/TransMVSNet.
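The FMT's core operation, alternating intra- (self-) and inter- (cross-) attention over image features, builds on standard scaled dot-product attention: with self-attention, an image's features attend to the same image; with cross-attention, a reference image's features attend to a source image. As a rough, dependency-free sketch of that idea only (not the paper's FMT implementation; the `attention` helper and the toy feature vectors below are hypothetical):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors.

    queries/keys/values are lists of equal-length float lists.
    Returns one aggregated vector per query.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Attention output: weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Toy 2-D features of a reference image and a source image (hypothetical).
ref = [[1.0, 0.0], [0.0, 1.0]]
src = [[1.0, 1.0], [0.5, -0.5]]

self_attn = attention(ref, ref, ref)    # intra-attention: context within one image
cross_attn = attention(ref, src, src)   # inter-attention: context across images
```

In the FMT these two attention types are interleaved so that each feature aggregates long-range context both within its own image and from the other views before matching.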

