Article

Multi-Modal Remote Sensing Image Matching Considering Co-Occurrence Filter

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 31, Issue -, Pages 2584-2597

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2022.3157450

Keywords

Feature extraction; Image matching; Image edge detection; Remote sensing; Matched filters; Nonlinear distortion; Image texture; Multi-modal remote sensing image matching; co-occurrence filter; new image gradient; log-polar descriptor

Funding

  1. National Natural Science Foundation of China [42030102, 42192583, 42001406, 62102268]
  2. Fund for Innovative Research Groups of the Hubei Natural Science Foundation [2020CFA003]
  3. China Postdoctoral Science Foundation [2020M672416]

Abstract

Traditional image feature matching methods are not satisfactory for multi-modal remote sensing images because of nonlinear radiation distortion differences and complicated geometric distortion. This paper proposes a new robust MRSI matching method based on co-occurrence filter space matching, which constructs a new co-occurrence scale space, extracts feature points from it, and optimizes the matching distance function. Experimental results show that the proposed method significantly outperforms other state-of-the-art methods in matching effectiveness.
Traditional image feature matching methods cannot obtain satisfactory results for multi-modal remote sensing images (MRSIs) in most cases, because different imaging mechanisms introduce significant nonlinear radiation distortion differences (NRD) and complicated geometric distortion. The key to MRSI matching is to weaken or eliminate the NRD and to extract more edge features. This paper introduces a new robust MRSI matching method based on co-occurrence filter (CoF) space matching (CoFSM). Our algorithm has three steps: (1) a new co-occurrence scale space based on the CoF is constructed, and feature points in the new scale space are extracted with an optimized image gradient; (2) the gradient location and orientation histogram algorithm is used to construct a 152-dimensional log-polar descriptor, which makes the multi-modal image description more robust; and (3) a position-optimized Euclidean distance function is established, which uses the displacement errors of the feature points in the horizontal and vertical directions to optimize the matching distance function. The optimized results are then rematched, and outliers are eliminated with a fast sample consensus algorithm. We performed comparison experiments of our CoFSM method against the scale-invariant feature transform (SIFT), upright-SIFT, PSO-SIFT, and radiation-variation insensitive feature transform (RIFT) methods on a multi-modal image dataset, and evaluated each method comprehensively, both qualitatively and quantitatively. The experimental results show that the proposed CoFSM method obtains satisfactory results in both the number of corresponding points and the root mean square error: the average number of obtained matches is 489.52 for CoFSM versus 412.52 for RIFT, and the matching performance of the proposed method is significantly better than that of the state-of-the-art comparison methods. CoFSM thus achieves good effectiveness and robustness. Executable programs of CoFSM and the MRSI datasets are published at https://skyearth.org/publication/project/CoFSM/
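To make the co-occurrence-filter idea behind step (1) concrete, the following is a minimal Python sketch of a generic CoF: filter weights combine a spatial Gaussian with normalized co-occurrence statistics of quantized intensities, so frequently co-occurring (texture) values are smoothed across while rarely co-occurring (edge-like) transitions are preserved. This is a sketch under stated assumptions (quantization level count, window size, marginal normalization), not the authors' exact co-occurrence scale-space construction; all names and parameters are illustrative.

```python
import numpy as np

def cooccurrence_filter(img, levels=32, window=7, sigma_s=2.0):
    """Minimal co-occurrence filter (CoF) sketch on a grayscale image."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    q = np.clip(((img - lo) / (hi - lo + 1e-12) * (levels - 1)).round().astype(int),
                0, levels - 1)                      # quantized intensity labels
    H, W = q.shape
    r = window // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))   # spatial Gaussian kernel

    # Collect spatially weighted co-occurrence counts of quantized values.
    C = np.zeros((levels, levels))
    pad_q = np.pad(q, r, mode='reflect')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = pad_q[r + dy:r + dy + H, r + dx:r + dx + W]
            np.add.at(C, (q.ravel(), shifted.ravel()), spatial[dy + r, dx + r])
    C /= C.sum()
    marg = C.sum(axis=1, keepdims=True)
    M = C / (marg @ marg.T + 1e-12)                 # normalize by marginal frequencies

    # Filter: weight = spatial Gaussian * normalized co-occurrence of the value pair.
    pad_f = np.pad(img, r, mode='reflect')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            neigh_val = pad_f[r + dy:r + dy + H, r + dx:r + dx + W]
            neigh_q = pad_q[r + dy:r + dy + H, r + dx:r + dx + W]
            w = spatial[dy + r, dx + r] * M[q, neigh_q]
            num += w * neigh_val
            den += w
    return num / (den + 1e-12)
```

Step (3) can likewise be read as augmenting the descriptor distance with a horizontal/vertical displacement error. The sketch below is one hypothetical form of such a position-optimized cost: provisional nearest-neighbour matches give a dominant (median) shift, and candidate matches are penalized by how far their displacement deviates from it. The weighting `alpha` and the use of the median shift are assumptions, not the paper's definition.

```python
import numpy as np

def position_optimized_distance(d1, d2, p1, p2, alpha=0.5):
    """Hypothetical position-aware matching cost.

    d1, d2 : (N, K) and (M, K) descriptor arrays (e.g., 152-D log-polar descriptors)
    p1, p2 : (N, 2) and (M, 2) keypoint locations (x, y)
    Returns an (N, M) cost combining descriptor distance with the
    horizontal/vertical displacement error relative to the dominant shift.
    """
    desc = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)   # (N, M) Euclidean
    nn = desc.argmin(axis=1)                                         # provisional matches
    med = np.median(p2[nn] - p1, axis=0)                             # dominant (dx, dy)
    dxy = p2[None, :, :] - p1[:, None, :]                            # (N, M, 2) displacements
    pos_err = np.abs(dxy - med).sum(axis=2)                          # L1 displacement error
    return desc + alpha * pos_err
```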
