Article

Multiscale spatial-spectral transformer network for hyperspectral and multispectral image fusion

Journal

INFORMATION FUSION
Volume 96, Pages 117-129

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2023.03.011

Keywords

Hyperspectral image (HSI); Multispectral image (MSI); Transformer; Pre-training; Spectral multi-head self-attention; Image fusion


In this paper, a Multiscale Spatial-spectral Transformer Network (MSST-Net) is proposed that uses the Transformer's self-attention mechanism to extract spectral features from the HSI and spatial features from the MSI. A self-supervised pre-training strategy is also introduced to improve the network's performance. Experimental results demonstrate that the proposed network achieves better performance than other state-of-the-art fusion methods.
Fusing hyperspectral images (HSIs) and multispectral images (MSIs) is an economical and feasible way to obtain images with both high spectral resolution and high spatial resolution. Due to the limited receptive field of convolution kernels, fusion methods based on convolutional neural networks (CNNs) fail to take advantage of the global relationships in a feature map. In this paper, to exploit the Transformer's powerful capability to extract global information from the whole feature map, we propose a novel Multiscale Spatial-spectral Transformer Network (MSST-Net). The proposed network is a two-branch network that integrates the self-attention mechanism of the Transformer to extract spectral features from the HSI and spatial features from the MSI, respectively. Before feature extraction, cross-modality concatenations are performed to achieve cross-modality information interaction between the two branches. Then, we propose a spectral Transformer (SpeT) to extract spectral features and introduce multiscale band/patch embeddings to obtain multiscale features through SpeTs and spatial Transformers (SpaTs). To further improve the network's performance and generalization, we propose a self-supervised pre-training strategy in which a masked bands autoencoder (MBAE) and a masked patches autoencoder (MPAE) are specially designed for self-supervised pre-training of the SpeTs and SpaTs. Extensive experiments on simulated and real datasets illustrate that the proposed network achieves better performance than other state-of-the-art fusion methods. The code of MSST-Net will be available at http://www.jiasen.tech/papers/ for the sake of reproducibility.
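To make the spectral-attention idea concrete, below is a minimal PyTorch sketch of a SpeT-style block, written from the abstract's description rather than the authors' released code: each spectral band of an HSI feature map is flattened into one token (a "band embedding"), and multi-head self-attention is applied across bands so that every band can attend to every other band. All class names, dimensions, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class SpectralTransformerBlock(nn.Module):
    """Illustrative SpeT-style block: one token per spectral band."""
    def __init__(self, num_bands: int, spatial_size: int, dim: int = 256, heads: int = 4):
        super().__init__()
        # Project each flattened band (H*W values) to a token of length `dim`.
        self.band_embed = nn.Linear(spatial_size, dim)
        # Learnable positional embedding distinguishes band indices.
        self.pos = nn.Parameter(torch.zeros(1, num_bands, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, H, W) -> band tokens: (batch, bands, dim)
        tokens = self.band_embed(x.flatten(2)) + self.pos
        t = self.norm1(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]  # attention across bands
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens

hsi = torch.randn(2, 31, 16, 16)                      # toy low-resolution HSI
out = SpectralTransformerBlock(num_bands=31, spatial_size=16 * 16)(hsi)
print(out.shape)                                      # torch.Size([2, 31, 256])

The spatial branch (SpaT) uses the same mechanism with ViT-style image patches as tokens instead of bands, and, per the abstract, multiscale features come from varying the band/patch embedding sizes.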
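The self-supervised pre-training strategy can be sketched similarly: in a masked bands autoencoder, a random subset of bands is hidden and the network is trained to reconstruct them from the visible ones (the MPAE does the same with spatial patches). The toy encoder/decoder below are stand-ins, and computing the loss only on the hidden bands follows the usual masked-autoencoder convention rather than any detail stated in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_band_loss(x: torch.Tensor, encoder: nn.Module, decoder: nn.Module,
                     mask_ratio: float = 0.5) -> torch.Tensor:
    """MBAE-style objective on x of shape (batch, bands, H, W)."""
    bands = x.shape[1]
    num_masked = max(1, int(bands * mask_ratio))
    masked = torch.randperm(bands)[:num_masked]       # bands to hide
    corrupted = x.clone()
    corrupted[:, masked] = 0.0                        # zero out the hidden bands
    recon = decoder(encoder(corrupted))               # predict the full cube
    # Reconstruction loss only on the bands that were hidden (assumption).
    return F.mse_loss(recon[:, masked], x[:, masked])

# Toy convolutional stand-ins; in the paper, the SpeTs themselves are
# pre-trained this way before fusion training.
enc = nn.Conv2d(31, 64, 3, padding=1)
dec = nn.Conv2d(64, 31, 3, padding=1)
loss = masked_band_loss(torch.randn(2, 31, 16, 16), enc, dec)
loss.backward()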
