Article

Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TGRS.2023.3284671

Keywords

Convolutional neural network (CNN); deep learning; hyperspectral; land use and land cover (LULC); light detection and ranging (LiDAR); multimodal image classification; synthetic aperture radar (SAR); vision transformer (ViT)

Abstract

This paper proposes a novel multimodal deep learning framework that extends the conventional ViT for land use and land cover classification. The proposed framework outperforms other transformer- and CNN-based models on two multimodal remote sensing benchmark datasets.
The recent success of attention-driven deep models, with the vision transformer (ViT) as one of the most representative examples, has sparked a wave of research exploring their adaptation to broader domains. However, current transformer-based approaches in the remote sensing (RS) community focus mainly on single-modality data and therefore may fall short of making full use of the ever-growing body of multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework that extends the conventional ViT with minimal modifications, targeting the task of land use and land cover (LULC) classification. Unlike common stems that adopt either a linear patch projection or a deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, which offers an economical way to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, the tokenized embeddings are fused through a cross-modality attention (CMA) module that exploits pixel-level spatial correlation in RS scenes. Both modifications significantly improve the discriminative ability of the classification tokens in each modality, and a further performance gain is attained by a final token-based decision-level fusion module. Extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset with hyperspectral (HS) and light detection and ranging (LiDAR) data and the Berlin dataset with HS and synthetic aperture radar (SAR) data, demonstrate that the proposed extended vision transformer (ExViT) outperforms contemporary competitors based on transformer or convolutional neural network (CNN) backbones, as well as several competitive classical machine-learning models. The source code and investigated datasets will be made publicly available at https://github.com/jingyao16/ExViT.
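To make the described pipeline concrete, the sketch below outlines an ExViT-style two-branch model in PyTorch: separable-convolution tokenizers per modality, position-shared ViT branches, a cross-modality attention step, and token-based decision-level fusion. This is an illustrative reconstruction under simplifying assumptions, not the authors' implementation (see the GitHub link above); all class names, dimensions, the placement of the CMA step, and the logit-averaging fusion are assumptions made for clarity.

```python
# Minimal, illustrative PyTorch sketch of an ExViT-style two-branch pipeline.
# NOT the authors' implementation (see the GitHub link above); layer names,
# dimensions, and fusion details are simplifying assumptions.
import torch
import torch.nn as nn


class SeparableConvTokenizer(nn.Module):
    """Depthwise-separable convolution stem: turns a patch of one modality
    (e.g., HS or LiDAR/SAR) into a sequence of spatial tokens."""
    def __init__(self, in_ch, dim):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, dim, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.pointwise(self.depthwise(x))  # (B, dim, H, W)
        return x.flatten(2).transpose(1, 2)    # (B, H*W, dim) token sequence


class CrossModalityAttention(nn.Module):
    """Fuses tokens of one modality with those of the other via attention,
    exploiting the pixel-level alignment of co-registered RS patches."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens_a, tokens_b):
        fused, _ = self.attn(query=tokens_a, key=tokens_b, value=tokens_b)
        return self.norm(tokens_a + fused)     # residual cross-modal update


class ExViTSketch(nn.Module):
    def __init__(self, ch_a, ch_b, dim=64, depth=4, heads=4,
                 patch=7, num_classes=15):
        super().__init__()
        n_tokens = patch * patch
        self.tok_a = SeparableConvTokenizer(ch_a, dim)
        self.tok_b = SeparableConvTokenizer(ch_b, dim)
        # Position embedding and CLS token are shared across the two branches
        # ("position-shared" ViTs).
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, dim))
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.enc_a = nn.TransformerEncoder(layer, depth)
        self.enc_b = nn.TransformerEncoder(layer, depth)
        self.cma = CrossModalityAttention(dim, heads)
        # Decision-level fusion: one classifier head per modality CLS token.
        self.head_a = nn.Linear(dim, num_classes)
        self.head_b = nn.Linear(dim, num_classes)

    def _add_cls(self, tokens):
        cls = self.cls.expand(tokens.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos

    def forward(self, patch_a, patch_b):
        ta, tb = self.tok_a(patch_a), self.tok_b(patch_b)
        # Cross-modality attention promotes information exchange between the
        # tokenized modalities (placement before the ViT branches is assumed).
        ta, tb = self.cma(ta, tb), self.cma(tb, ta)
        za = self.enc_a(self._add_cls(ta))[:, 0]   # CLS token, branch A
        zb = self.enc_b(self._add_cls(tb))[:, 0]   # CLS token, branch B
        # Token-based decision-level fusion: average the per-branch logits.
        return 0.5 * (self.head_a(za) + self.head_b(zb))


if __name__ == "__main__":
    # e.g., 7x7 patches: 144-band HS + 1-band LiDAR (Houston2013-like shapes)
    model = ExViTSketch(ch_a=144, ch_b=1)
    hs, lidar = torch.randn(2, 144, 7, 7), torch.randn(2, 1, 7, 7)
    print(model(hs, lidar).shape)   # torch.Size([2, 15])
```

The two-branch layout mirrors the abstract's description: each modality keeps its own convolutional tokenizer and encoder while sharing positional information, and the final prediction combines the per-modality classification tokens rather than concatenating raw features.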
