Article

Multi-Difference Image Fusion Change Detection Using a Visual Attention Model on VHR Satellite Data

Journal

REMOTE SENSING
Volume 15, Issue 15, Pages: -

Publisher

MDPI
DOI: 10.3390/rs15153799

Keywords

very high resolution (VHR); change detection; multi-difference image fusion; visual attention model; feature extraction

Abstract

For very-high-resolution (VHR) remote sensing images with complex objects and rich textural information, multi-difference image fusion has been proven to be an effective way to improve the performance of change detection. However, errors are superimposed during this process, and a single spectral feature cannot fully exploit the correlation between pixels, resulting in low robustness. To overcome these problems and optimize the performance of multi-difference image fusion in change detection, we propose a novel multi-difference image fusion change detection method based on a visual attention model (VA-MDCD). First, we construct difference images using change vector analysis (CVA) and spectral gradient difference (SGD). Second, we use the visual attention model to extract color, intensity, and orientation features from the difference images to obtain the difference saliency images. Third, we use a wavelet transform fusion algorithm to fuse the two saliency images. Finally, we apply Otsu threshold segmentation to obtain the final change detection map. To validate the effectiveness of VA-MDCD on VHR images, two datasets, from the Jilin-1 and Beijing-2 satellites, are selected for experiments. Compared with classical methods, the proposed method performs better, with fewer missed alarms (MA) and false alarms (FA), demonstrating its strong robustness and generalization ability. The F-measure on the two datasets is 0.6671 and 0.7313, respectively. In addition, the results of ablation experiments confirm that all three feature extraction modules of the model play a positive role.
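The abstract's pipeline (difference image construction, saliency, fusion, thresholding) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the visual attention saliency step and the wavelet-transform fusion are approximated here by min-max normalization and pixel-wise averaging, and the toy image pair, array shapes, and function names are all assumptions.

```python
import numpy as np

def cva_difference(img_t1, img_t2):
    """Change vector analysis (CVA): per-pixel magnitude of the spectral
    change vector between two co-registered multiband images (H, W, B)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def sgd_difference(img_t1, img_t2):
    """Spectral gradient difference (SGD): compare the band-to-band
    gradients at the two dates and sum their absolute differences."""
    g1 = np.diff(img_t1.astype(float), axis=-1)  # spectral gradient at t1
    g2 = np.diff(img_t2.astype(float), axis=-1)  # spectral gradient at t2
    return np.abs(g2 - g1).sum(axis=-1)

def otsu_threshold(image, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # cumulative class-0 probability
    mu = np.cumsum(hist * centers)    # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.argmax(np.nan_to_num(between))]

def fuse(sal_a, sal_b):
    """Stand-in for the paper's wavelet-transform fusion: normalize each
    saliency image to [0, 1] and average them pixel-wise."""
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
    return 0.5 * norm(sal_a) + 0.5 * norm(sal_b)

# Toy bi-temporal pair: 3 bands, with a band-dependent change in the
# lower-right block (so both CVA and SGD respond to it).
rng = np.random.default_rng(0)
t1 = rng.uniform(0.0, 0.1, size=(32, 32, 3))
t2 = t1.copy()
t2[16:, 16:, :] += np.array([0.9, 0.5, 0.1])   # simulated change region

fused = fuse(cva_difference(t1, t2), sgd_difference(t1, t2))
change_map = fused > otsu_threshold(fused)      # final binary change map
```

Averaging is used only to keep the sketch dependency-free; a wavelet-based fusion would decompose both saliency images (e.g. with a 2-D discrete wavelet transform), merge the coefficient sub-bands, and reconstruct before thresholding.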

