Article

Learning to Reduce Scale Differences for Large-Scale Invariant Image Matching

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2022.3210602

Keywords

Image matching; Convolutional neural networks; Task analysis; large scale changes; scale difference reduction; scale ratio estimation; covisibility-attention-reinforced matching module

Most image matching methods perform poorly when encountering large scale changes between images. To address this problem, we propose a Scale-Difference-Aware Image Matching method (SDAIM) that reduces image scale differences before local feature extraction by resizing both images of a pair according to an estimated scale ratio. To estimate this scale ratio accurately, we propose a Covisibility-Attention-Reinforced Matching module (CVARM) and, based on it, design a novel neural network termed Scale-Net. CVARM emphasizes covisible areas within the image pair and suppresses distraction from areas visible in only one image. Quantitative and qualitative experiments confirm that Scale-Net achieves higher scale ratio estimation accuracy and much better generalization than all existing scale ratio estimation methods. Further experiments on image matching and relative pose estimation tasks demonstrate that SDAIM and Scale-Net greatly boost the performance of representative local features and state-of-the-art local feature matching methods.
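The core idea of the scale-difference reduction step can be illustrated with a short sketch. This is not the authors' implementation: the function name, the convention that the ratio expresses image A's scale relative to image B's, and the choice to split the resizing symmetrically between the two images are all assumptions for illustration; the abstract only states that both images are resized according to an estimated scale ratio.

```python
# Hypothetical sketch of SDAIM's resizing step (not the authors' code).
# Assumption: scale_ratio is the scale of image A relative to image B,
# as produced by a scale-ratio estimator such as Scale-Net.
import math


def reduce_scale_difference(size_a, size_b, scale_ratio):
    """Return new (width, height) sizes for images A and B.

    The ratio is split symmetrically: A is shrunk by sqrt(scale_ratio)
    and B is enlarged by the same factor, so that after resizing the
    two images depict the scene at (approximately) the same scale.
    """
    s = math.sqrt(scale_ratio)
    new_a = (round(size_a[0] / s), round(size_a[1] / s))
    new_b = (round(size_b[0] * s), round(size_b[1] * s))
    return new_a, new_b


# Example: A is at 4x the scale of B, so A is halved and B is doubled.
a, b = reduce_scale_difference((800, 600), (400, 300), 4.0)
```

After this step, local features would be extracted from the resized pair, where conventional detectors and matchers no longer face a large scale gap. The symmetric split is one plausible design choice; resizing only one image by the full ratio would equalize scales just as well.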

