Article

Advancing Plain Vision Transformer Toward Remote Sensing Foundation Model

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)

DOI: 10.1109/TGRS.2022.3222818

Keywords

Task analysis; Transformers; Computational modeling; Feature extraction; Remote sensing; Adaptation models; Visualization; Object detection; remote sensing (RS); scene classification; semantic segmentation; vision transformer (ViT)


Large-scale vision models tailored to remote sensing tasks are proposed in this article, using Vision Transformers and a new rotated varied-size window attention mechanism. The experiments demonstrate the superior performance of the model in detection, classification, and segmentation tasks, as well as its advantages in terms of computational complexity and data efficiency.
Large-scale vision foundation models have made significant progress in visual tasks on natural images, with vision transformers (ViTs) being the primary choice due to their good scalability and representation ability. However, large-scale models in remote sensing (RS) have not yet been sufficiently explored. In this article, we resort to plain ViTs with about 100 million parameters and make the first attempt to propose large vision models tailored to RS tasks, investigating how such large models perform. To handle the large image sizes and arbitrarily oriented objects in RS images, we propose a new rotated varied-size window attention to replace the original full attention in transformers, which significantly reduces the computational cost and memory footprint while learning better object representations by extracting rich context from the generated diverse windows. Experiments on detection tasks show the superiority of our model over all state-of-the-art models, achieving 81.24% mean average precision (mAP) on the DOTA-V1.0 dataset. The results of our models on downstream classification and segmentation tasks also show competitive performance compared to existing advanced methods. Further experiments show the advantages of our models in terms of computational complexity and data efficiency when transferring to downstream tasks. The code and models will be released at https://github.com/ViTAE-Transformer/Remote-Sensing-RVSA.
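The core efficiency argument in the abstract is that restricting self-attention to local windows cuts the quadratic cost of full attention over all tokens down to a cost that is linear in the number of windows. The sketch below is a minimal, simplified illustration of plain (axis-aligned, fixed-size) window attention in NumPy; it is not the authors' implementation, and it omits the learned per-window scale, shift, and rotation offsets that distinguish the proposed rotated varied-size window attention, as well as the query/key/value projections and multi-head structure of a real transformer block.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, window=4):
    """Self-attention restricted to non-overlapping square windows.

    x: feature map of shape (H, W, C), with H and W divisible by `window`.
    Illustrative sketch only: the paper's RVSA additionally predicts a
    scale, shift, and rotation for each window, which is not modeled here.
    """
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            # Flatten the window into n = window**2 tokens of dimension C.
            patch = x[i:i + window, j:j + window].reshape(-1, C)
            # Scaled dot-product attention computed only within the window,
            # so each (n, n) attention map stays small regardless of H and W.
            attn = softmax(patch @ patch.T / np.sqrt(C))
            out[i:i + window, j:j + window] = (attn @ patch).reshape(window, window, C)
    return out
```

With full attention, the attention map over an H x W feature map has (HW)^2 entries; with windows of n = window^2 tokens, the total is only (HW) * n entries, which is why window-based attention scales to the large image sizes common in remote sensing.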

