Article

An Empirical Study of Remote Sensing Pretraining

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TGRS.2022.3176603

Keywords

Classification; convolutional neural network (CNN); detection; remote sensing (RS) pretraining (RSP); semantic segmentation; vision transformer

Deep learning has largely reshaped remote sensing (RS) research for aerial image understanding and achieved great success. Nevertheless, most existing deep models are initialized with ImageNet pretrained weights, where natural images inevitably present a large domain gap relative to aerial images, probably limiting the fine-tuning performance on downstream aerial scene tasks. This issue motivates us to conduct an empirical study of RS pretraining (RSP) on aerial images. To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset to date, MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNNs) and vision transformers such as Swin and ViTAE, which have shown promising performance on computer vision tasks. Then, we investigate the impact of RSP on representative downstream tasks, including scene recognition, semantic segmentation, object detection, and change detection, using these CNN and vision transformer backbones. The empirical study shows that RSP can help deliver distinctive performance on scene recognition tasks and in perceiving RS-related semantics such as Bridge and Airplane. We also find that, although RSP mitigates the data discrepancies of traditional ImageNet pretraining on RS images, it may still suffer from task discrepancies, where downstream tasks require representations different from those learned for scene recognition. These findings call for further research efforts on both large-scale pretraining datasets and effective pretraining methods. The codes and pretrained models will be released at https://github.com/ViTAETransformer/ViTAE-Transformer-Remote-Sensing.
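The pretrain-then-fine-tune workflow the abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the two-stage structure (a shared backbone whose weights are reused, with the task head swapped out) is the point, while all array shapes, the toy linear `features` backbone, and the omitted training loops are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in for RSP): a backbone "pretrained" on a source task,
# e.g. scene recognition on MillionAID. The paper trains CNN and vision
# transformer backbones (Swin, ViTAE) from scratch; here the weights are
# just randomly initialized placeholders and the training loop is omitted.
W_backbone = rng.normal(0.0, 0.1, size=(8, 4))   # shared feature extractor
W_src_head = rng.normal(0.0, 0.1, size=(4, 3))   # source-task classifier

def features(x):
    """Backbone forward pass: linear projection followed by ReLU."""
    return np.maximum(x @ W_backbone, 0.0)

x = rng.normal(size=(2, 8))                      # dummy batch of 2 inputs
src_logits = features(x) @ W_src_head            # pretraining-task output

# Stage 2: transfer. Reuse the pretrained backbone unchanged and attach a
# freshly initialized head for a downstream task (segmentation, detection,
# or change detection in the paper).
W_new_head = rng.normal(0.0, 0.1, size=(4, 5))   # downstream-task head
downstream_logits = features(x) @ W_new_head

print(src_logits.shape, downstream_logits.shape)  # (2, 3) (2, 5)
```

In practice the backbone weights would be loaded from a checkpoint trained on MillionAID, and both the backbone and the new head are then fine-tuned on the downstream task's data.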

Authors

Di Wang; Jing Zhang; Bo Du; Gui-Song Xia; Dacheng Tao
