Article

Vision Transformers for Remote Sensing Image Classification

Journal

REMOTE SENSING
Volume 13, Issue 3, Article 516

Publisher

MDPI
DOI: 10.3390/rs13030516

Keywords

remote sensing; image-level classification; vision transformers; multihead attention; data augmentation

Funding

  1. King Saud University, Riyadh, Saudi Arabia [RSP-2020/69]


This paper proposes a remote-sensing scene-classification method based on vision transformers, which utilize multihead attention mechanisms to establish long-range contextual relationships between pixels in images. The approach involves dividing images into patches, converting them into sequences, and applying data augmentation techniques for improved classification performance. The study also demonstrates the efficacy of compressing the network by pruning half of the layers while maintaining competitive classification accuracies.
In this paper, we propose a remote-sensing scene-classification method based on vision transformers. These types of networks, which are now recognized as state-of-the-art models in natural language processing, do not rely on convolution layers as in standard convolutional neural networks (CNNs). Instead, they use multihead attention mechanisms as the main building block to derive long-range contextual relationships between pixels in images. In the first step, the images under analysis are divided into patches and then converted to a sequence by flattening and embedding. To retain positional information, a position embedding is added to each patch embedding. The resulting sequence is then fed to several multihead attention layers to generate the final representation. At the classification stage, the first token of the sequence is fed to a softmax classification layer. To boost the classification performance, we explore several data augmentation strategies to generate additional data for training. Moreover, we show experimentally that we can compress the network by pruning half of the layers while maintaining competitive classification accuracies. Experimental results conducted on different remote-sensing image datasets demonstrate the promising capability of the model compared to state-of-the-art methods. Specifically, the Vision Transformer obtains an average classification accuracy of 98.49%, 95.86%, 95.56% and 93.83% on the Merced, AID, Optimal31 and NWPU datasets, respectively, while the compressed version obtained by removing half of the multihead attention layers yields 97.90%, 94.27%, 95.30% and 93.05%.
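
The pipeline summarized in the abstract (patch splitting, flattening and linear embedding, position embeddings, multihead attention layers, and classification from the first token) can be illustrated with a minimal PyTorch sketch. The sizes below are assumed defaults for illustration only, not the authors' implementation or hyperparameters.

    # Minimal sketch of the vision-transformer pipeline described in the
    # abstract: patch splitting, flattening + linear embedding, position
    # embeddings, a multihead-attention encoder, and classification from
    # the first (class) token. All sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ViTClassifier(nn.Module):
        def __init__(self, image_size=224, patch_size=16, dim=768,
                     depth=12, heads=12, num_classes=21):
            super().__init__()
            num_patches = (image_size // patch_size) ** 2
            # Flatten each patch and project it to the embedding dimension.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                         stride=patch_size)
            # Learnable class token and position embeddings.
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(encoder_layer,
                                                 num_layers=depth)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, x):                    # x: (B, 3, H, W)
            x = self.patch_embed(x)              # (B, dim, H/P, W/P)
            x = x.flatten(2).transpose(1, 2)     # (B, num_patches, dim)
            cls = self.cls_token.expand(x.size(0), -1, -1)
            x = torch.cat([cls, x], dim=1) + self.pos_embed
            x = self.encoder(x)
            return self.head(x[:, 0])            # logits from first token

During training, the logits from the first token would typically be turned into class probabilities by a softmax, for example implicitly through nn.CrossEntropyLoss.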
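
The abstract also mentions exploring several data augmentation strategies to generate additional training data. As a hedged example, a typical augmentation pipeline for scene classification might look like the following; the specific strategies used in the paper are not listed on this page, so these transforms are assumptions.

    # Assumed example of a training-time augmentation pipeline; the
    # paper's actual augmentation strategies may differ.
    from torchvision import transforms

    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])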
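
Finally, the compressed variant is described as pruning half of the multihead attention layers. A minimal sketch against the ViTClassifier above follows; which half of the layers is retained is an assumption, as the page does not specify the selection scheme.

    # Illustrative layer pruning: keep only the first half of the encoder
    # layers (the subset kept by the authors is assumed here), then
    # fine-tune the smaller model to recover accuracy.
    model = ViTClassifier(depth=12)
    half = len(model.encoder.layers) // 2
    model.encoder.layers = nn.ModuleList(model.encoder.layers[:half])
    model.encoder.num_layers = half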
