Article

When CNNs Meet Vision Transformer: A Joint Framework for Remote Sensing Scene Classification

Journal

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/LGRS.2021.3109061

Keywords

Feature extraction; Semantics; Remote sensing; Training; Streaming media; Data models; Data mining; Convolutional neural network (CNN); high-resolution remote sensing (HRRS) images; joint loss function; scene classification; vision transformer

Funding

  1. National Natural Science Foundation of China [42071302]
  2. Innovation Program for Chongqing Overseas Returnees [cx2019144]
  3. Fundamental Research Funds for the Central Universities [2020CDCGTM002]

Abstract

The study proposes a joint framework, CTNet, which combines a CNN and a vision transformer (ViT) to enhance the discriminative ability of features for high-resolution remote sensing (HRRS) scene classification. The method achieves high classification accuracy on the AID and NWPU-RESISC45 datasets, demonstrating superior performance compared with other state-of-the-art methods.
Scene classification is an indispensable part of remote sensing image interpretation, and various convolutional neural network (CNN)-based methods have been explored to improve classification accuracy. Although they show good classification performance on high-resolution remote sensing (HRRS) images, the discriminative ability of the extracted features is still limited. In this letter, a high-performance joint framework combining CNNs and a vision transformer (ViT), named CTNet, is proposed to further boost the discriminative ability of features for HRRS scene classification. CTNet contains two modules: a ViT stream (T-stream) and a CNN stream (C-stream). In the T-stream, flattened image patches are fed into a pretrained ViT model to mine semantic features in HRRS images. To complement the T-stream, a pretrained CNN is transferred to extract local structural features in the C-stream. The semantic and structural features are then concatenated to predict the labels of unknown samples. Finally, a joint loss function is developed to optimize the joint model and increase intraclass aggregation. The highest accuracies obtained by CTNet on the Aerial Image Dataset (AID) and the Northwestern Polytechnical University (NWPU)-RESISC45 dataset are 97.70% and 95.49%, respectively. The classification results reveal that the proposed method achieves high classification performance compared with other state-of-the-art (SOTA) methods.
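
The abstract outlines a two-stream design: a ViT stream for global semantic features, a CNN stream for local structural features, concatenation of the two feature sets for classification, and a joint loss that increases intraclass aggregation. The PyTorch sketch below illustrates that structure under stated assumptions; the backbone choices (ViT-B/16 and ResNet-34 via the timm library), the fused feature dimension, and the center-loss-style intraclass term are illustrative stand-ins, not the authors' exact configuration or loss.

```python
# Minimal sketch of the two-stream idea described in the abstract.
# Assumptions (not from the paper): ViT-B/16 and ResNet-34 backbones,
# a 512-d fused feature, and a center-loss-style intraclass term.
import torch
import torch.nn as nn
import timm


class CTNetSketch(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 512, pretrained: bool = True):
        super().__init__()
        # T-stream: pretrained ViT; num_classes=0 makes timm return pooled features (768-d for ViT-B/16).
        self.t_stream = timm.create_model("vit_base_patch16_224", pretrained=pretrained, num_classes=0)
        # C-stream: pretrained CNN; num_classes=0 returns pooled features (512-d for ResNet-34).
        self.c_stream = timm.create_model("resnet34", pretrained=pretrained, num_classes=0)
        fused_dim = self.t_stream.num_features + self.c_stream.num_features
        # Project the concatenated features to a shared space before classification.
        self.fuse = nn.Sequential(nn.Linear(fused_dim, feat_dim), nn.ReLU(inplace=True))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        semantic = self.t_stream(x)    # global semantic features from the ViT stream
        structural = self.c_stream(x)  # local structural features from the CNN stream
        fused = self.fuse(torch.cat([semantic, structural], dim=1))
        return self.classifier(fused), fused


class JointLoss(nn.Module):
    """Cross-entropy plus a center-loss-style term that pulls same-class features
    together -- one plausible way to 'increase intraclass aggregation'."""

    def __init__(self, num_classes: int, feat_dim: int = 512, weight: float = 0.01):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.centers = nn.Parameter(torch.zeros(num_classes, feat_dim))
        self.weight = weight

    def forward(self, logits, features, labels):
        intra = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        return self.ce(logits, labels) + self.weight * intra


if __name__ == "__main__":
    model = CTNetSketch(num_classes=45, pretrained=False)  # e.g. NWPU-RESISC45 has 45 classes
    criterion = JointLoss(num_classes=45)
    images = torch.randn(2, 3, 224, 224)                   # ViT-B/16 expects 224x224 inputs
    labels = torch.tensor([0, 1])
    logits, feats = model(images)
    loss = criterion(logits, feats, labels)
    loss.backward()
```

Returning both the logits and the fused features lets the joint loss act on the shared representation, which is one plausible reading of how a cross-entropy term and an intraclass-aggregation term would be combined in such a framework.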

Authors

