Article

Multi-Modal Remote Sensing Image Matching Considering Co-Occurrence Filter

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 31, Pages 2584-2597

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2022.3157450

Keywords

Feature extraction; Image matching; Image edge detection; Remote sensing; Matched filters; Nonlinear distortion; Image texture; Multi-modal remote sensing image matching; co-occurrence filter; new image gradient; log-polar descriptor

Funding

  1. National Natural Science Foundation of China [42030102, 42192583, 42001406, 62102268]
  2. Fund for Innovative Research Groups of the Hubei Natural Science Foundation [2020CFA003]
  3. China Postdoctoral Science Foundation [2020M672416]


Traditional image feature matching methods cannot obtain satisfactory results for multi-modal remote sensing images (MRSIs) in most cases, because different imaging mechanisms introduce significant nonlinear radiation distortion (NRD) and complicated geometric distortion. The key to MRSI matching is weakening or eliminating the NRD and extracting more edge features. This paper introduces a new robust MRSI matching method based on co-occurrence filter (CoF) space matching (CoFSM). Our algorithm has three steps: (1) a new co-occurrence scale space based on the CoF is constructed, and feature points in the new scale space are extracted using an optimized image gradient; (2) the gradient location and orientation histogram (GLOH) algorithm is used to construct a 152-dimensional log-polar descriptor, which makes the multi-modal image description more robust; and (3) a position-optimized Euclidean distance function is established, which uses the displacement error of the feature points in the horizontal and vertical directions to optimize the matching distance function. The optimized results are then rematched, and outliers are eliminated with a fast sample consensus (FSC) algorithm. We compared our CoFSM method with the scale-invariant feature transform (SIFT), upright-SIFT, PSO-SIFT, and radiation-variation insensitive feature transform (RIFT) methods on a multi-modal image dataset, evaluating each method comprehensively both qualitatively and quantitatively. The experimental results show that CoFSM obtains satisfactory results in both the number of corresponding points and the root mean square error: the average number of obtained matches is 489.52 for CoFSM versus 412.52 for RIFT, and the matching performance of the proposed method is significantly better than that of the other state-of-the-art methods.
Our proposed CoFSM method achieved good effectiveness and robustness. Executable CoFSM programs and MRSI datasets are available at: https://skyearth.org/publication/project/CoFSM/
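The position-optimized distance in step (3) can be sketched as follows. This is a minimal illustration, not the paper's published formula: it assumes a descriptor Euclidean distance penalized by each candidate pair's horizontal/vertical deviation from a global displacement estimate, and the penalty weight `alpha` and the additive form are assumptions.

```python
import numpy as np

def position_optimized_distance(desc_a, desc_b, pt_a, pt_b, shift, alpha=0.5):
    """Descriptor Euclidean distance plus a penalty for the pair's
    deviation from a global (dx, dy) displacement estimate.
    `alpha` and the additive combination are illustrative assumptions."""
    d_desc = np.linalg.norm(np.asarray(desc_a) - np.asarray(desc_b))
    # displacement error in the horizontal and vertical directions
    ex = (pt_b[0] - pt_a[0]) - shift[0]
    ey = (pt_b[1] - pt_a[1]) - shift[1]
    return d_desc + alpha * np.hypot(ex, ey)

# A pair consistent with the global shift scores lower than a
# geometrically inconsistent pair with the same descriptors.
desc = np.random.default_rng(0).random(152)  # 152-dim, as in the paper
consistent = position_optimized_distance(desc, desc, (10, 20), (15, 23), shift=(5, 3))
outlier = position_optimized_distance(desc, desc, (10, 20), (40, 90), shift=(5, 3))
```

Matches rescored this way can then be filtered with a sample-consensus step, as in the abstract's final stage.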
