Article

Multimodal Image Fusion Framework for End-to-End Remote Sensing Image Registration

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)

DOI: 10.1109/TGRS.2023.3247642

Keywords

Remote sensing; Feature extraction; Image matching; Image registration; Task analysis; Convolutional neural networks; Image fusion; End-to-end registration; multimodal fusion; remote sensing image; spatial transformer networks

Abstract

We formulate registration as a function that maps the input reference and sensed images to eight displacement parameters between prescribed matching points, in contrast to conventional pipelines (feature extraction, description, matching, and geometric constraints). The projection transformation matrix (PTM) is then computed inside the neural network and used to warp the sensed image, uniting all matching tasks under one framework. In this article, we propose a multimodal image fusion network with self-attention to merge the feature representations of the reference and sensed images. The fused representation is then used to regress the displacement parameters of the prescribed points, yielding the PTM between the reference and sensed images. Finally, the PTM is fed into a spatial transformer network (STN), which warps the sensed image into the coordinate frame of the reference image, achieving end-to-end matching. In addition, a dual-supervised loss function is proposed to optimize the network from both the prescribed-point displacement and the overall pixel-matching perspectives. The effectiveness of our method is validated by qualitative and quantitative experiments on multimodal remote sensing image matching tasks. The code is available at: https://github.com/liliangzhi110/E2EIR.
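
The mapping from the eight regressed displacement parameters to the PTM is, in effect, a four-point direct linear transform (DLT). Below is a minimal PyTorch sketch of that step; the choice of the four image corners as the prescribed points and the function name `ptm_from_displacements` are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def ptm_from_displacements(disp, h, w):
    """Recover the 3x3 projection transformation matrix (PTM) from the
    eight regressed displacement parameters of four prescribed points.

    disp: (B, 8) tensor of (dx, dy) offsets for the four prescribed
          points (assumed here to be the image corners).
    Returns a (B, 3, 3) batch of homographies mapping reference pixel
    coordinates to sensed pixel coordinates.
    """
    B = disp.shape[0]
    src = torch.tensor([[0.0, 0.0], [w - 1.0, 0.0],
                        [w - 1.0, h - 1.0], [0.0, h - 1.0]],
                       dtype=disp.dtype, device=disp.device)
    src = src.unsqueeze(0).expand(B, 4, 2)
    dst = src + disp.view(B, 4, 2)

    # Direct linear transform with h33 fixed to 1: each correspondence
    # contributes two linear equations, giving an 8x8 system A p = b.
    rows, rhs = [], []
    for i in range(4):
        x, y = src[:, i, 0], src[:, i, 1]
        u, v = dst[:, i, 0], dst[:, i, 1]
        zero, one = torch.zeros_like(x), torch.ones_like(x)
        rows.append(torch.stack([x, y, one, zero, zero, zero, -u * x, -u * y], dim=1))
        rows.append(torch.stack([zero, zero, zero, x, y, one, -v * x, -v * y], dim=1))
        rhs += [u, v]
    A = torch.stack(rows, dim=1)                 # (B, 8, 8)
    b = torch.stack(rhs, dim=1).unsqueeze(-1)    # (B, 8, 1)

    p = torch.linalg.solve(A, b).squeeze(-1)     # eight homography entries
    one = torch.ones(B, 1, dtype=disp.dtype, device=disp.device)
    return torch.cat([p, one], dim=1).view(B, 3, 3)
```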
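
The PTM then drives the STN's sampling step. PyTorch's built-in `affine_grid` covers only affine maps, so a projective warp builds the sampling grid by hand and resamples with `grid_sample`; the sketch below, including its backward-warp convention (the PTM maps reference pixels to sensed pixels), is an assumed rendering of this stage rather than the repository's code.

```python
import torch
import torch.nn.functional as F

def stn_warp(sensed, ptm):
    """Warp the sensed image into the reference frame with the PTM.

    sensed: (B, C, H, W) image batch; ptm: (B, 3, 3) homographies in
    pixel coordinates, mapping reference pixels to sensed pixels.
    """
    B, _, H, W = sensed.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=sensed.dtype, device=sensed.device),
        torch.arange(W, dtype=sensed.dtype, device=sensed.device),
        indexing="ij")
    ones = torch.ones_like(xs)
    grid = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1)  # (1, 3, H*W)

    # Apply the homography and dehomogenize (the epsilon guards the
    # degenerate case; denominators stay positive for mild warps).
    mapped = ptm @ grid                                       # (B, 3, H*W)
    xy = mapped[:, :2] / (mapped[:, 2:3] + 1e-8)

    # Normalize to [-1, 1], the coordinate range grid_sample expects.
    nx = 2.0 * xy[:, 0] / (W - 1) - 1.0
    ny = 2.0 * xy[:, 1] / (H - 1) - 1.0
    sample_grid = torch.stack([nx, ny], dim=-1).view(B, H, W, 2)
    return F.grid_sample(sensed, sample_grid, align_corners=True)
```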
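
Lastly, the dual-supervised loss pairs a term on the prescribed-point displacements with a term on the warped pixels. The abstract does not state the norms or the weighting, so the L1 terms and the `lam` weight below are placeholders.

```python
import torch.nn.functional as F

def dual_supervised_loss(pred_disp, gt_disp, warped, reference, lam=1.0):
    """Sketch of a dual-supervised objective: prescribed-point
    displacement supervision plus overall pixel-matching supervision.
    The norms and weighting are assumptions, not the paper's values.
    """
    disp_term = F.l1_loss(pred_disp, gt_disp)   # point-displacement view
    pixel_term = F.l1_loss(warped, reference)   # pixel-matching view
    return disp_term + lam * pixel_term
```

Because the DLT solve and the grid sampling are both differentiable, gradients from either term reach the regression head, which is what makes the pipeline trainable end to end.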

