Article

Structural Regression Fusion for Unsupervised Multimodal Change Detection

Journal

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TGRS.2023.3294884

Keywords

Change detection (CD); fusion; image regression; multimodal data; structural asymmetry

Multimodal change detection (MCD) is a challenging topic in remote sensing because multimodal images cannot be compared directly. This article proposes a structural regression fusion (SRF) method that reduces the influence of structural asymmetry and improves image transformation performance in MCD. SRF incorporates fusion into the regression process and introduces three types of constraints to perform the fused image transformation. Verified on six real datasets, SRF outperforms several state-of-the-art methods.
Multimodal change detection (MCD) is an increasingly interesting but very challenging topic in remote sensing, because changes cannot be detected by directly comparing multimodal images from different domains. In this article, we first analyze the structural asymmetry between multitemporal images and show its negative impact on previous MCD methods that rely on image structures. Specifically, when structural asymmetry is present, previous structure-based methods can complete a structure comparison or image regression in only one direction and fail in the other; that is, they cannot transform images with complex structures (more categories) into images with simple structures (fewer categories). To reduce the influence of structural asymmetry, we propose a structural regression fusion (SRF)-based method that simultaneously transforms the pre-event and post-event images into each other's image domain, calculating the forward and backward changed images, respectively. Notably, unlike previous late-fusion methods that fuse the forward and backward changed images in a postprocessing stage, SRF incorporates fusion into the regression process itself, which fully exploits the connection between the changed images and thus improves image transformation performance and yields better changed images. Specifically, SRF imposes three types of constraints to perform the fused image transformation: a structure consistency-based regression term, a change smoothness and alignment-based fusion term, and a prior sparsity-based penalty term. Finally, changes are extracted by comparing the transformed and original images. The proposed SRF is verified on six real datasets against several state-of-the-art (SOTA) methods. Source code of the proposed method will be made available at https://github.com/yulisun/SRF.
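To make the mechanism concrete, the following minimal sketch (not the authors' implementation, which couples fusion into the regression objective through the three constraint terms above) illustrates graph-based structural regression on a toy grayscale image pair. It shows the one-directional failure caused by structural asymmetry, and then combines the two difference images with a simple late-fusion rule; the intensity-based k-NN graph and the `0.5` threshold are illustrative choices, not values from the paper:

```python
import numpy as np

def knn_graph(img, k=3):
    """k-NN graph over pixels using intensity similarity (a crude
    stand-in for a structure graph built from patch features)."""
    x = img.reshape(-1, 1).astype(float)
    d = (x - x.T) ** 2                        # pairwise squared distances
    np.fill_diagonal(d, np.inf)               # exclude self-matches
    return np.argsort(d, axis=1, kind="stable")[:, :k]

def regress(graph, target):
    """Transform `target` into the graph's domain: each pixel is
    reconstructed from `target` values at its graph neighbors."""
    t = target.reshape(-1).astype(float)
    return t[graph].mean(axis=1).reshape(target.shape)

# toy multimodal pair: X (pre-event), Y (post-event, different radiometry)
X = np.zeros((6, 6)); X[1:3, 1:3] = 1.0       # one object, two classes
Y = np.full((6, 6), 5.0); Y[1:3, 1:3] = 2.0   # same scene, new modality
Y[4:6, 4:6] = 9.0                             # a *changed* region (new object)

d_fwd = np.abs(regress(knn_graph(X), Y) - Y)  # Y seen through X's structure
d_bwd = np.abs(regress(knn_graph(Y), X) - X)  # X seen through Y's structure

# Y has more categories than X, so the backward direction misses the
# change entirely -- the structural asymmetry the paper analyzes:
print(d_bwd.max())                            # 0.0

# late-fusion baseline: combine the two normalized difference images
change = np.maximum(d_fwd / np.ptp(Y), d_bwd / np.ptp(X)) > 0.5
print(change[4:6, 4:6].all(), change[:4, :4].any())  # True False
```

Because the backward difference image carries no signal here, any method restricted to that single direction fails; fusing the two directions recovers the change, and SRF's contribution is to perform that fusion inside the regression rather than afterward.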

Authors


