Article

Mining Joint Intraimage and Interimage Context for Remote Sensing Change Detection

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TGRS.2023.3275819

Keywords

Feature extraction; Remote sensing; Memory modules; Training; Convolutional neural networks; Semantics; Vegetation mapping; Change detection; bitemporal remote sensing images; intraimage context; interimage context

Abstract

Recent deep learning methods for change detection focus on extracting more discriminative context within individual images. However, owing to factors such as seasonal change and noise, the appearance of objects tends to be heterogeneous across scenes. Consequently, intraimage context alone is inadequate to represent objects of a specific category, and pseudo changes are inevitable in the detection results. To address this issue, we propose a context aggregation network (CANet) that mines interimage context over all training images to further enhance intraimage context. Specifically, a Siamese network equipped with temporal attention modules serves as a feature encoder to extract multiscale temporal features from bitemporal images. A context extraction module is then devised to capture long-range spatial-channel context within individual images. Meanwhile, context representations of the underlying categories in the scene are inferred from all training images in an unsupervised manner. Finally, these two kinds of contextual information are aggregated and fed into a multiscale fusion module to produce the detection map. CANet is compared with several state-of-the-art methods on three benchmark datasets: the season-varying change detection (SVCD) dataset, the Sun Yat-sen University change detection (SYSU-CD) dataset, and the Learning Vision and Remote Sensing Laboratory building change detection (LEVIR-CD) dataset. Our method outperforms all comparison methods in terms of F1, overall accuracy (OA), and intersection over union (IoU). The results of CANet on the three datasets are available at https://github.com/NuistZF/CANet-for-change-detection, and the code will be made public soon.
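
To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of its four stages: a weight-shared (Siamese) encoder with temporal attention, intraimage context extraction, an interimage prototype memory accumulated over training images, and fusion into a change map. All module names (TemporalAttention, ContextExtraction, InterImageMemory, CANetSketch) and every design detail here are assumptions made for illustration, not the authors' released implementation; see the linked repository for the official code.

```python
# Hypothetical sketch of the CANet-style pipeline described above; module names,
# the prototype memory, and all design details are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalAttention(nn.Module):
    """Re-weight bitemporal features with a gate driven by their difference (assumed design)."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, f1, f2):
        w = self.gate(torch.abs(f1 - f2))            # channel attention from the temporal difference
        return f1 * w, f2 * w


class ContextExtraction(nn.Module):
    """Long-range spatial context within a single image (non-local / self-attention style)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)             # B x HW x C/2
        k = self.key(x).flatten(2)                               # B x C/2 x HW
        v = self.value(x).flatten(2).transpose(1, 2)             # B x HW x C
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        return x + (attn @ v).transpose(1, 2).reshape(b, c, h, w)


class InterImageMemory(nn.Module):
    """Category prototypes accumulated over all training images (unsupervised EMA update)."""

    def __init__(self, channels, num_prototypes=8, momentum=0.99):
        super().__init__()
        self.register_buffer("prototypes", torch.randn(num_prototypes, channels))
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats):
        x = feats.flatten(2).transpose(1, 2).reshape(-1, feats.shape[1])   # all pixels, N x C
        assign = (F.normalize(x, dim=1) @ F.normalize(self.prototypes, dim=1).t()).argmax(1)
        for k in range(self.prototypes.shape[0]):                # EMA-update each matched prototype
            if (assign == k).any():
                self.prototypes[k] = (self.momentum * self.prototypes[k] +
                                      (1 - self.momentum) * x[assign == k].mean(0))

    def forward(self, feats):
        if self.training:
            self.update(feats)
        b, c, h, w = feats.shape
        x = feats.flatten(2).transpose(1, 2)                     # B x HW x C
        sim = torch.softmax(F.normalize(x, dim=-1) @ F.normalize(self.prototypes, dim=1).t(), dim=-1)
        return feats + (sim @ self.prototypes).transpose(1, 2).reshape(b, c, h, w)


class CANetSketch(nn.Module):
    """Toy wiring: shared (Siamese) encoder -> temporal attention -> intra/inter context -> fusion."""

    def __init__(self, channels=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                            # stand-in for a real backbone
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.temporal_attn = TemporalAttention(channels)
        self.intra_ctx = ContextExtraction(channels)
        self.inter_ctx = InterImageMemory(channels)
        self.fusion = nn.Conv2d(2 * channels, num_classes, 1)

    def forward(self, img1, img2):
        f1, f2 = self.encoder(img1), self.encoder(img2)          # weight-shared feature extraction
        f1, f2 = self.temporal_attn(f1, f2)
        f1 = self.inter_ctx(self.intra_ctx(f1))                  # aggregate both kinds of context
        f2 = self.inter_ctx(self.intra_ctx(f2))
        logits = self.fusion(torch.cat([f1, f2], dim=1))
        return F.interpolate(logits, size=img1.shape[-2:], mode="bilinear", align_corners=False)
```

In this toy wiring, a pair of coregistered bitemporal images is passed as `CANetSketch()(img1, img2)` and returns per-pixel change logits at the input resolution; the real method additionally fuses features at multiple scales, which is omitted here for brevity.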
