Article

Self-Supervised Multisensor Change Detection

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TGRS.2021.3109957

Keywords

Optical sensors; Optical imaging; Training; Earth; Synthetic aperture radar; Deep learning; Spatial resolution; Change detection (CD); Multisensor analysis; Self-supervised learning

Funding

  1. European Research Council (ERC) under the European Union [ERC-2016-StG714087]
  2. Helmholtz Association through the framework of Helmholtz Artificial Intelligence (Helmholtz AI), Local Unit "Munich Unit at Aeronautics, Space and Transport (MASTr)" [ZT-I-PF-5-01]
  3. Helmholtz Excellent Professorship "Data Science in Earth Observation - Big Data Fusion for Urban Research" [W2-W3-100]
  4. German Federal Ministry of Education and Research (BMBF) [01DD20001]

This study proposes a multisensor change detection method that trains a network in a self-supervised fashion, using deep clustering and contrastive learning, on only the unlabeled target bitemporal images. Evaluation on four multimodal scenes demonstrates the advantages of the self-supervised approach.
Most change detection (CD) methods assume that prechange and postchange images are acquired by the same sensor. However, in many real-life scenarios, e.g., natural disasters, it is more practical to use the latest available images before and after the incident, which may be acquired by different sensors. In particular, we are interested in the combination of images acquired by optical and synthetic aperture radar (SAR) sensors. SAR images appear vastly different from optical images, even when they capture the same scene. In addition, CD methods are often constrained to use only the target image pair, with no labeled data and no additional unlabeled data. Such constraints limit the scope of traditional supervised machine learning and unsupervised generative approaches for multisensor CD. The recent rapid development of self-supervised learning methods has shown that some of them can work with only a few images. Motivated by this, in this work we propose a method for multisensor CD that uses only the unlabeled target bitemporal images, training a network in a self-supervised fashion via deep clustering and contrastive learning. The proposed method is evaluated on four multimodal bitemporal scenes showing change, and the benefits of our self-supervised approach are demonstrated. Code is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/sarOpticalMultisensorTgrs2021.
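
The abstract describes the training objective only at a high level. As a rough illustration, and not the authors' implementation (which also involves deep clustering and is available at the repository linked above), the sketch below shows one way a contrastive objective could align an optical encoder and a SAR encoder on co-registered patches of the unlabeled bitemporal pair. The names PatchEncoder and info_nce, the network sizes, and the patch sampling are all hypothetical.

```python
# Hypothetical sketch (not the authors' code): patch-level contrastive alignment
# of an optical encoder and a SAR encoder on a single unlabeled bitemporal pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Small convolutional encoder mapping image patches to unit-norm feature vectors."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):
        h = self.net(x).flatten(1)
        return F.normalize(self.proj(h), dim=1)

def info_nce(z_opt, z_sar, temperature=0.1):
    """Contrastive loss: patch i of the optical image should match patch i of the SAR image."""
    logits = z_opt @ z_sar.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(z_opt.size(0), device=z_opt.device)
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of co-registered optical/SAR patches sampled from the
# (assumed mostly unchanged) bitemporal scene.
opt_enc, sar_enc = PatchEncoder(in_channels=3), PatchEncoder(in_channels=1)
optimizer = torch.optim.Adam(
    list(opt_enc.parameters()) + list(sar_enc.parameters()), lr=1e-3
)

opt_patches = torch.randn(16, 3, 32, 32)   # prechange optical patches (placeholder data)
sar_patches = torch.randn(16, 1, 32, 32)   # postchange SAR patches at the same locations

loss = info_nce(opt_enc(opt_patches), sar_enc(sar_patches))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In such a setup, once the two encoders broadly agree on unchanged areas, per-location distances between their features for the prechange and postchange images can be thresholded to produce a change map.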

Reviews

Primary Rating

4.7 (not enough ratings)

Secondary Ratings

  Novelty: -
  Significance: -
  Scientific rigor: -