Article

Vision Transformers for Remote Sensing Image Classification

Journal

REMOTE SENSING
Volume 13, Issue 3, Pages: -

Publisher

MDPI
DOI: 10.3390/rs13030516

Keywords

remote sensing; image level classification; vision transformers; multihead attention; data augmentation

Funding

  1. King Saud University, Riyadh, Saudi Arabia [RSP-2020/69]

This paper proposes a remote-sensing scene-classification method based on vision transformers, which utilize multihead attention mechanisms to establish long-range contextual relationships between pixels in images. The approach involves dividing images into patches, converting them into sequences, and applying data augmentation techniques for improved classification performance. The study also demonstrates the efficacy of compressing the network by pruning half of the layers while maintaining competitive classification accuracies.
In this paper, we propose a remote-sensing scene-classification method based on vision transformers. These networks, which are now recognized as state-of-the-art models in natural language processing, do not rely on convolution layers as standard convolutional neural networks (CNNs) do. Instead, they use multihead attention mechanisms as the main building block to derive long-range contextual relations between pixels in images. In a first step, the images under analysis are divided into patches and converted to a sequence by flattening and embedding. To retain positional information, position embeddings are added to these patches. The resulting sequence is then fed to several multihead attention layers to generate the final representation. At the classification stage, the first token of the sequence is fed to a softmax classification layer. To boost the classification performance, we explore several data augmentation strategies to generate additional training data. Moreover, we show experimentally that the network can be compressed by pruning half of its layers while maintaining competitive classification accuracies. Experimental results on different remote-sensing image datasets demonstrate the promising capability of the model compared to state-of-the-art methods. Specifically, the Vision Transformer obtains average classification accuracies of 98.49%, 95.86%, 95.56% and 93.83% on the Merced, AID, Optimal31 and NWPU datasets, respectively, while the compressed version obtained by removing half of the multihead attention layers yields 97.90%, 94.27%, 95.30% and 93.05%, respectively.
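
As a reading aid, the following is a minimal PyTorch sketch of the pipeline described in the abstract: patch embedding, a class token with position embeddings, a stack of multihead-attention encoder layers, and a classification head applied to the first token. The configuration values (224x224 inputs, 16x16 patches, 12 layers, 12 heads, 768-dimensional embeddings, 45 output classes) are illustrative assumptions, not the exact settings reported in the paper.

```python
# Minimal vision-transformer scene classifier, sketched from the abstract.
# All layer sizes below are assumptions for illustration only.
import torch
import torch.nn as nn


class SimpleViTClassifier(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=45):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2

        # Step 1: split the image into patches and linearly embed each patch
        # (a strided convolution is equivalent to patchify + linear projection).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)

        # Step 2: learnable class token and position embeddings keep
        # positional information after flattening into a sequence.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

        # Step 3: a stack of multihead self-attention (Transformer encoder)
        # layers produces the final representation.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

        # Step 4: the first (class) token is fed to the classification head;
        # the softmax is applied by the cross-entropy loss during training.
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                            # x: (B, 3, H, W)
        patches = self.patch_embed(x)                # (B, dim, H/P, W/P)
        tokens = patches.flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])              # logits from class token


# Example usage on a dummy batch of remote-sensing scene images.
model = SimpleViTClassifier(num_classes=45)          # e.g. 45 classes as in NWPU
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                                  # torch.Size([2, 45])
```

The pruning experiment mentioned in the abstract would correspond, in this sketch, to reducing `depth` (the number of encoder layers) by half while keeping the rest of the model unchanged.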
