Article

Self-Supervised SAR-Optical Data Fusion of Sentinel-1/-2 Images

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TGRS.2021.3128072

Keywords

Task analysis; Synthetic aperture radar; Optical imaging; Optical sensors; Deep learning; Training; Fuses; Data fusion; land-cover mapping; pixel level; remote sensing; self-supervised learning; Sentinel-1/-2

Funding

  1. China Scholarship Council

Abstract

The study introduces a self-supervised framework for SAR-optical data fusion and land-cover mapping. SAR and optical images are fused through a multi-view contrastive loss, achieving accuracy comparable to image-level contrastive learning, and pretrained features are combined with spectral information to assign a land-cover class to each pixel.
The effective combination of the complementary information provided by the huge amount of unlabeled multisensor data (e.g., synthetic aperture radar (SAR) and optical images) is a critical issue in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multi-view data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping tasks. SAR and optical images are fused by using a multi-view contrastive loss at the image level and the super-pixel level according to one of three possible strategies: early, intermediate, and late fusion. For the land-cover mapping task, we assign each pixel a land-cover class by jointly using the pretrained features and the spectral information of the image itself. Experimental results show that the proposed approach not only achieves comparable accuracy but also reduces the feature dimension with respect to the image-level contrastive learning method. Among the three fusion strategies, the intermediate fusion strategy achieves the best performance. The combination of the pixel-level fusion approach and self-training on spectral indices leads to further improvements in the land-cover mapping task with respect to the image-level fusion approach, especially with sparse pseudo-labels. The code to reproduce our results can be found at https://github.com/yusin2it/SARoptical_fusion.
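The abstract describes pulling SAR and optical embeddings of the same scene together with a multi-view contrastive loss. The following is only a minimal sketch of that general idea, assuming a standard InfoNCE-style symmetric formulation with hypothetical encoder outputs z_sar and z_opt; it is not the authors' implementation, which is available at the linked repository.

```python
# Minimal sketch of a multi-view (SAR-optical) contrastive loss,
# assuming an InfoNCE-style formulation; names and shapes are illustrative only.
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(z_sar: torch.Tensor,
                               z_opt: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """z_sar, z_opt: (N, D) embeddings of the SAR and optical views of the
    same N patches (or super-pixels). Corresponding rows are positives;
    all other cross-view pairs act as negatives."""
    z_sar = F.normalize(z_sar, dim=1)
    z_opt = F.normalize(z_opt, dim=1)
    logits = z_sar @ z_opt.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_sar.size(0), device=z_sar.device)
    # Symmetrized cross-entropy: SAR -> optical and optical -> SAR directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Random features standing in for the two encoder branches (hypothetical).
    z1 = torch.randn(8, 128)   # SAR-branch embeddings
    z2 = torch.randn(8, 128)   # optical-branch embeddings
    print(multiview_contrastive_loss(z1, z2).item())
```

In the paper, such a loss is applied at both the image level and the super-pixel level, and the SAR and optical branches can be combined with early, intermediate, or late fusion; the sketch above covers only the loss computation itself.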
