Journal
IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE
Volume 11, Issue 3, Pages 98-106
Publisher: IEEE-Inst Electrical Electronics Engineers Inc
DOI: 10.1109/MGRS.2023.3281651
Keywords
Earth; Scene classification; Spaceborne radar; Source coding; Semantic segmentation; Self-supervised learning; Benchmark testing
This article introduces SSL4EO-S12, an unlabeled dataset for self-supervised pretraining on Earth observation satellite imagery. The authors demonstrate its effectiveness with representative self-supervised methods and multiple downstream applications, and compare it with existing datasets.
Self-supervised pretraining has the potential to generate expressive representations from large-scale Earth observation (EO) data without human annotation. However, most existing pretraining in the field is based on ImageNet or on medium-sized, labeled remote sensing (RS) datasets. In this article, we share Self-Supervised Learning for Earth Observation-Sentinel-1/2 (SSL4EO-S12), an unlabeled dataset assembling a large-scale, global, multimodal, and multiseasonal corpus of satellite imagery. We demonstrate that SSL4EO-S12 succeeds in self-supervised pretraining for a set of representative methods, momentum contrast (MoCo), self-distillation with no labels (DINO), masked autoencoders (MAE), and data2vec, and in multiple downstream applications, including scene classification, semantic segmentation, and change detection. Our benchmark results demonstrate the effectiveness of SSL4EO-S12 compared to existing datasets. The dataset, related source code, and pretrained models are available at https://github.com/zhu-xlab/SSL4EO-S12.
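Contrastive methods such as MoCo, mentioned above, pretrain an encoder by pulling two augmented views of the same scene together while pushing apart views of other scenes, typically via the InfoNCE objective. The following is a minimal NumPy sketch of that objective only; the embedding dimension, number of negatives, and temperature are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def info_nce_loss(q, k_pos, k_neg, temperature=0.07):
    """InfoNCE loss for a single query.

    q:      (d,)   query embedding (one augmented view)
    k_pos:  (d,)   positive key (the other view of the same image)
    k_neg:  (n, d) negative keys (views of other images)
    """
    # L2-normalize so dot products become cosine similarities
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_neg = k_neg / np.linalg.norm(k_neg, axis=1, keepdims=True)

    # One positive logit followed by n negative logits
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / temperature
    logits -= logits.max()  # numerical stability
    # Cross-entropy with the positive at index 0
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
d, n = 128, 16
q = rng.standard_normal(d)
# A query matched with its own positive should incur a lower loss
# than one matched with an unrelated key.
loss_aligned = info_nce_loss(q, q.copy(), rng.standard_normal((n, d)))
loss_random = info_nce_loss(q, rng.standard_normal(d), rng.standard_normal((n, d)))
print(loss_aligned, loss_random)
```

In MoCo specifically, the negative keys come from a large queue of encodings produced by a slowly updated momentum encoder, rather than from the current batch; the loss itself is unchanged.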