Article

Automatic Matching of Multimodal Remote Sensing Images via Learned Unstructured Road Feature

Journal

REMOTE SENSING
Volume 14, Issue 18

Publisher

MDPI
DOI: 10.3390/rs14184595

Keywords

multimodal image matching; semantic road features; local binary entropy descriptor; feature matching

Funding

  1. National Natural Science Foundation of China [62102436]
  2. National Key Laboratory of Science and Technology [6142217210503]
  3. Projects Foundation of University [202250E050, 202250E060]
  4. Hubei Province Natural Science Foundation [2021CFB279]

Abstract

This paper proposes LURF, an automatic matching method for multimodal remote sensing images that extracts semantic road features, detects road-intersection key points, describes them with a local entropy descriptor, and applies a global optimization strategy to obtain correct matches.
Automatic matching of multimodal remote sensing images remains a vital yet challenging task for remote sensing and computer vision applications. Most traditional methods focus on key point detection and description on the original image and ignore deeper semantic information such as road features; as a result, they cannot effectively resist nonlinear grayscale distortion and suffer from low matching efficiency and poor accuracy. Motivated by this, this paper proposes LURF, a novel automatic matching method for multimodal images based on learned unstructured road features. LURF makes four main contributions. First, semantic road features are extracted from the multimodal images with the segmentation model CRESIv2. Second, a stable and reliable intersection point detector is proposed to detect unstructured key points on the semantic road features. Third, a local entropy descriptor is designed to describe the key points using the local skeleton feature. Finally, a global optimization strategy is adopted to achieve correct matching. Extensive experimental results demonstrate that the proposed LURF outperforms state-of-the-art methods in both accuracy and efficiency on different multimodal image datasets.
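
To make the pipeline described above more concrete, the following is a minimal Python sketch of the two geometric steps: detecting road-intersection key points on a skeletonized road mask and computing a simple sector-entropy descriptor around each point. This is not the authors' LURF implementation; the junction heuristic (three or more skeleton neighbours), the descriptor layout, the parameters (`radius`, `n_sectors`), and the function names are illustrative assumptions, and the segmentation step (CRESIv2) and the global optimization matching are only indicated in comments.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize


def detect_intersections(road_mask):
    """Find road-intersection key points on a binary road mask.

    The mask is thinned to a one-pixel-wide skeleton; a skeleton pixel with
    three or more 8-connected skeleton neighbours is treated as an
    intersection candidate (an illustrative heuristic, not LURF's detector).
    """
    skeleton = skeletonize(road_mask > 0)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)
    neighbours = ndimage.convolve(skeleton.astype(np.uint8), kernel,
                                  mode="constant", cval=0)
    junctions = skeleton & (neighbours >= 3)
    return np.argwhere(junctions), skeleton


def sector_entropy_descriptor(skeleton, point, radius=32, n_sectors=8):
    """Describe a key point by the binary entropy of skeleton occupancy in
    angular sectors of a circular neighbourhood (assumed parameters)."""
    padded = np.pad(skeleton, radius)          # zero padding keeps indexing simple
    r, c = point
    patch = padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
    rows, cols = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.hypot(rows, cols)
    angle = np.arctan2(rows, cols)             # in (-pi, pi]
    edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    descriptor = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sector = (dist <= radius) & (angle >= lo) & (angle <= hi)
        p = float(patch[sector].mean())        # fraction of road pixels in sector
        if p <= 0.0 or p >= 1.0:
            descriptor.append(0.0)
        else:
            descriptor.append(-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)))
    return np.asarray(descriptor)


# Usage on two already-segmented road masks (e.g. CRESIv2 outputs):
# pts_a, skel_a = detect_intersections(mask_a)
# desc_a = np.stack([sector_entropy_descriptor(skel_a, tuple(p)) for p in pts_a])
# Descriptors from both images can then be matched by nearest neighbour and
# refined with a robust global model fit (e.g. RANSAC over an affine transform),
# standing in for the paper's global optimization strategy.
```
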
