Article

Multi-Difference Image Fusion Change Detection Using a Visual Attention Model on VHR Satellite Data

Journal

REMOTE SENSING
Volume 15, Issue 15

Publisher

MDPI
DOI: 10.3390/rs15153799

Keywords

very high resolution (VHR); change detection; multi-difference image fusion; visual attention model; feature extraction

A novel multi-difference image fusion change detection method based on a visual attention model (VA-MDCD) is proposed for very-high-resolution (VHR) remote sensing images. The method constructs difference images, calculates difference saliency images, fuses saliency images, and applies threshold segmentation to obtain the final change detection map. Experimental results show that the proposed method outperforms classical methods in terms of missed alarms and false alarms, demonstrating its strong robustness and generalization ability.
For very-high-resolution (VHR) remote sensing images with complex objects and rich textural information, multi-difference image fusion has proven to be an effective way to improve the performance of change detection. However, errors accumulate during this process, and a single spectral feature cannot fully exploit the correlation between pixels, resulting in low robustness. To overcome these problems and optimize the performance of multi-difference image fusion in change detection, we propose a novel multi-difference image fusion change detection method based on a visual attention model (VA-MDCD). First, we construct difference images using change vector analysis (CVA) and spectral gradient difference (SGD). Second, we use the visual attention model to extract color, intensity, and orientation features from the difference images and obtain the difference saliency images. Third, we use a wavelet transform fusion algorithm to fuse the two saliency images. Finally, we apply OTSU threshold segmentation to obtain the final change detection map. To validate the effectiveness of VA-MDCD on VHR images, two datasets, acquired by Jilin-1 and Beijing-2, are selected for the experiments. Compared with classical methods, the proposed method performs better, with fewer missed alarms (MA) and false alarms (FA), which demonstrates its strong robustness and generalization ability. The F-measure on the two datasets is 0.6671 and 0.7313, respectively. In addition, the results of ablation experiments confirm that all three feature extraction modules of the model play a positive role.
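
The abstract outlines a four-stage pipeline: difference image construction with CVA and SGD, saliency computation with a visual attention model, wavelet-domain fusion of the two saliency images, and OTSU thresholding. The Python sketch below shows one way such a pipeline could be wired together; it is not the authors' implementation. The library choices (NumPy, scikit-image, PyWavelets), the simplified Itti-style saliency (Gaussian center-surround intensity, Gabor orientation energy, and a placeholder color map), the assumed SGD formulation, and the max-absolute coefficient fusion rule are all assumptions made for illustration.

```python
# Minimal sketch of a VA-MDCD-style pipeline as described in the abstract.
# Library choices and the simplified saliency model are assumptions,
# not the authors' exact code.
import numpy as np
import pywt
from skimage.filters import gaussian, gabor, threshold_otsu


def cva_difference(img_t1, img_t2):
    """Change vector analysis: per-pixel norm of the spectral change vector.
    Inputs are (H, W, bands) arrays from the two acquisition dates."""
    return np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=-1)


def sgd_difference(img_t1, img_t2):
    """Spectral gradient difference (assumed formulation): compare gradients
    between adjacent bands rather than raw band values."""
    g1 = np.diff(img_t1.astype(float), axis=-1)  # spectral gradient, date 1
    g2 = np.diff(img_t2.astype(float), axis=-1)  # spectral gradient, date 2
    return np.linalg.norm(g2 - g1, axis=-1)


def saliency(diff_img):
    """Simplified visual-attention saliency: intensity, color, and orientation
    conspicuity maps averaged together. A stand-in for the Itti-style model
    referenced in the paper."""
    img = (diff_img - diff_img.min()) / (np.ptp(diff_img) + 1e-12)
    # Intensity feature: center-surround difference of two Gaussian scales.
    intensity = np.abs(gaussian(img, sigma=2) - gaussian(img, sigma=8))
    # Orientation features: Gabor energy accumulated over four orientations.
    orientation = np.zeros_like(img)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(img, frequency=0.2, theta=theta)
        orientation += np.hypot(real, imag)
    orientation /= 4.0
    # A true color-opponency channel needs multi-band input; for a single-band
    # difference image the intensity map is reused here as a placeholder.
    color = intensity
    maps = [m / (m.max() + 1e-12) for m in (intensity, color, orientation)]
    return sum(maps) / 3.0


def wavelet_fuse(sal_a, sal_b, wavelet="haar"):
    """Fuse two saliency images in the wavelet domain: average the
    approximation coefficients, keep the larger-magnitude detail coefficients."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(sal_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(sal_b, wavelet)

    def pick(a, b):
        # Max-absolute selection rule for detail coefficients.
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused = pywt.idwt2(
        ((ca_a + ca_b) / 2.0, (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b))),
        wavelet,
    )
    return fused[: sal_a.shape[0], : sal_a.shape[1]]


def va_mdcd(img_t1, img_t2):
    """End-to-end sketch: difference images -> saliency -> fusion -> OTSU map."""
    sal_cva = saliency(cva_difference(img_t1, img_t2))
    sal_sgd = saliency(sgd_difference(img_t1, img_t2))
    fused = wavelet_fuse(sal_cva, sal_sgd)
    return fused > threshold_otsu(fused)


# Example usage with hypothetical co-registered (H, W, bands) arrays:
# change_map = va_mdcd(image_date1, image_date2)
```

In this sketch the binary map marks pixels whose fused saliency exceeds the OTSU threshold; swapping in a different saliency model or fusion rule only requires replacing the corresponding function.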
