4.7 Article

An Empirical Study of Remote Sensing Pretraining

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2022.3176603

Keywords

Classification; convolutional neural network (CNN); detection; remote sensing (RS) pretraining (RSP); semantic segmentation; vision transformer


Deep learning has achieved great success in aerial image understanding for remote sensing (RS) research. However, most existing models are initialized with ImageNet pretrained weights, and the domain gap between natural and aerial images can limit their fine-tuning performance on downstream aerial scene tasks. This study empirically investigates RS pretraining (RSP) on aerial images, training different networks from scratch on the MillionAID dataset to obtain RS-pretrained backbones. The results show that RSP improves scene recognition performance and the perception of RS-related semantics, but task discrepancies remain, highlighting the need for further research on large-scale pretraining datasets and effective pretraining methods.
Deep learning has largely reshaped remote sensing (RS) research for aerial image understanding and achieved great success. Nevertheless, most of the existing deep models are initialized with ImageNet pretrained weights, whereas natural images inevitably present a large domain gap relative to aerial images, probably limiting the fine-tuning performance on downstream aerial scene tasks. This issue motivates us to conduct an empirical study of RS pretraining (RSP) on aerial images. To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset to date, MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNNs) and vision transformers, such as Swin and ViTAE, which have shown promising performance on computer vision tasks. Then, we investigate the impact of RSP on representative downstream tasks, including scene recognition, semantic segmentation, object detection, and change detection, using these CNN and vision transformer backbones. The empirical study shows that RSP can help deliver distinctive performance on scene recognition tasks and in perceiving RS-related semantics, such as Bridge and Airplane. We also find that, although RSP mitigates the data discrepancies of traditional ImageNet pretraining on RS images, it may still suffer from task discrepancies, where downstream tasks require different representations from scene recognition tasks. These findings call for further research efforts on both large-scale pretraining datasets and effective pretraining methods. The codes and pretrained models will be released at https://github.com/ViTAETransformer/ViTAE-Transformer-Remote-Sensing.
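The abstract describes a two-stage recipe: train backbones from scratch on MillionAID, then fine-tune the resulting RS-pretrained weights on downstream aerial tasks. Below is a minimal PyTorch/timm sketch of the fine-tuning half for a scene recognition task. The checkpoint file name, class count, and hyperparameters are hypothetical placeholders for illustration, not the authors' released configuration, which lives in the linked repository.

    # Sketch of fine-tuning an RS-pretrained (RSP) backbone on a downstream
    # aerial scene recognition task. Paths and hyperparameters are hypothetical.
    import torch
    import torch.nn as nn
    import timm

    NUM_DOWNSTREAM_CLASSES = 45          # e.g., a scene recognition benchmark (hypothetical)
    RSP_CHECKPOINT = "rsp_swin_t.pth"    # hypothetical file name for an RSP-pretrained backbone

    # Build a Swin backbone without ImageNet weights: RSP trains from scratch on MillionAID.
    model = timm.create_model("swin_tiny_patch4_window7_224",
                              pretrained=False,
                              num_classes=NUM_DOWNSTREAM_CLASSES)

    # Load the RS-pretrained backbone weights, skipping the classification head,
    # whose shape differs between MillionAID pretraining and the downstream task.
    state = torch.load(RSP_CHECKPOINT, map_location="cpu")
    state = state.get("model", state)                       # unwrap if saved as {"model": ...}
    state = {k: v for k, v in state.items() if not k.startswith("head.")}
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

    # Standard fine-tuning setup; values are illustrative only.
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.05)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One fine-tuning step on a batch of downstream aerial scene images."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

The same pattern (load RSP backbone weights, replace the task head, fine-tune end to end) carries over to the segmentation, detection, and change detection tasks mentioned above, with the backbone plugged into the corresponding task framework.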

Authors


Comments

Primary rating

4.7 (insufficient ratings)

Secondary ratings

Novelty: -
Importance: -
Scientific rigor: -

Recommendations

No data available.