4.7 Article

Multimodal Fusion Transformer for Remote Sensing Image Classification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2023.3286826

Keywords

Convolutional neural networks (CNNs); multihead cross-patch attention (mCrossPA); remote sensing (RS); vision transformer (ViT)

Abstract

Vision transformers (ViTs) have gained popularity in image classification tasks and researchers are now exploring their use in hyperspectral image (HSI) classification tasks. A new multimodal fusion transformer (MFT) network, incorporating a multihead cross-patch attention (mCrossPA), is introduced for HSI land-cover classification. The proposed model achieves superior performance by utilizing complementary information and tokenization.
Vision transformers (ViTs) have been trending in image classification tasks due to their promising performance when compared with convolutional neural networks (CNNs). As a result, many researchers have tried to incorporate ViTs into hyperspectral image (HSI) classification tasks. To achieve satisfactory performance close to that of CNNs, transformers need fewer parameters. ViTs and other similar transformers use an external classification (CLS) token, which is randomly initialized and often fails to generalize well, whereas other sources of multimodal data, such as light detection and ranging (LiDAR), offer the potential to improve these models by means of a CLS token. In this article, we introduce a new multimodal fusion transformer (MFT) network, which comprises a multihead cross-patch attention (mCrossPA) mechanism for HSI land-cover classification. Our mCrossPA utilizes other sources of complementary information in addition to the HSI in the transformer encoder to achieve better generalization. The concept of tokenization is used to generate CLS and HSI patch tokens, helping to learn a distinctive representation in a reduced and hierarchical feature space. Extensive experiments are carried out on widely used benchmark datasets, i.e., the University of Houston (UH), Trento, University of Southern Mississippi Gulfpark (MUUFL), and Augsburg. We compare the results of the proposed MFT model with those of other state-of-the-art transformers, classical CNNs, and conventional classifiers. The superior performance achieved by the proposed model is due to the use of mCrossPA. The source code will be made available publicly at https://github.com/AnkurDeria/MFT.
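The abstract describes fusing a classification token derived from a complementary modality (e.g., LiDAR) with tokenized HSI patches inside the transformer encoder. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation (which is available at the GitHub link above); the class name CrossPatchAttention, the tensor shapes, and the inputs lidar_cls and hsi_patches are illustrative assumptions.

import torch
import torch.nn as nn

class CrossPatchAttention(nn.Module):
    """Hypothetical cross-patch attention: a CLS token derived from an
    auxiliary modality (e.g., LiDAR) attends over tokenized HSI patches."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lidar_cls: torch.Tensor, hsi_patches: torch.Tensor) -> torch.Tensor:
        # lidar_cls:   (batch, 1, dim) -- query: CLS token from the auxiliary modality
        # hsi_patches: (batch, n, dim) -- keys/values: tokenized HSI patches
        fused, _ = self.attn(query=lidar_cls, key=hsi_patches, value=hsi_patches)
        return self.norm(lidar_cls + fused)  # residual connection, as in standard encoders

# Toy usage with made-up shapes: 64-dim tokens, 49 HSI patch tokens, batch of 2.
cls_token = torch.randn(2, 1, 64)
patches = torch.randn(2, 49, 64)
print(CrossPatchAttention(dim=64)(cls_token, patches).shape)  # torch.Size([2, 1, 64])

The residual connection and layer normalization are included so the fused CLS token can be passed on to subsequent encoder layers, mirroring standard transformer practice.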

