Journal
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Volume: 61
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2023.3320954
Keywords
Feature extraction; Pansharpening; Transformers; Convolution; Convolutional neural networks; Computational modeling; Task analysis; Channel attention; feature integration; pansharpening; transformer
This research proposes a novel synergistic transformer and CNN method for pansharpening. It extracts features from the LRMS and PAN images with a parallel U-shaped feature extraction module, then integrates them through a feature fusion module to produce high-quality pansharpening results.
Pansharpening is the process of fusing a high-resolution panchromatic (PAN) image with a low-resolution multispectral (LRMS) image to obtain a high-resolution multispectral (HRMS) image. Convolutional neural networks (CNNs) have been widely applied in this field because of their remarkable learning capabilities. However, the local receptive field of convolutional operators limits the long-range feature extraction ability of CNNs. Conversely, transformer models exhibit strong capabilities in modeling long-range representations but fall short in capturing local, short-range feature dependencies. To this end, we propose a novel synergistic transformer and CNN for pansharpening (STCP). First, a parallel U-shaped feature extraction module (PUFEM) is constructed to extract the features of the LRMS and PAN images, improving the feature representation of the two source images. Within the PUFEM, we combine the complementary feature learning capabilities of the CNN and transformer in a long- and short-range feature integration block (LSFIB), which extracts short-range and long-range features at different scales in parallel. Then, a channel attention module (CAM)-based feature fusion module (CFFM) is constructed to integrate the features extracted by the PUFEM. Finally, the shallow features from the PAN image are reused to supply detailed features, which are combined with the fused features from the CFFM to produce the final pansharpened results. Extensive experiments show that our STCP outperforms several state-of-the-art (SOTA) approaches both subjectively and objectively.
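The abstract does not specify the internals of the CAM-based fusion step. Below is a minimal sketch, assuming a standard squeeze-and-excitation-style channel attention applied to channel-concatenated MS/PAN feature maps; all function names, weights, and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def channel_attention(feats, w1, b1, w2, b2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    feats: (C, H, W) feature map.
    w1, b1: bottleneck ("squeeze") MLP layer, C -> C // r.
    w2, b2: expansion ("excitation") MLP layer, C // r -> C.
    """
    c = feats.shape[0]
    squeezed = feats.reshape(c, -1).mean(axis=1)         # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed + b1)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))     # sigmoid gate in (0, 1)
    return feats * gate[:, None, None]                   # reweight each channel

def cam_fuse(f_ms, f_pan, w1, b1, w2, b2):
    """Concatenate MS and PAN feature maps along channels, then reweight."""
    fused = np.concatenate([f_ms, f_pan], axis=0)        # (2C, H, W)
    return channel_attention(fused, w1, b1, w2, b2)

# Toy shapes: C channels per branch, reduction ratio r.
rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 4
f_ms = rng.standard_normal((C, H, W))
f_pan = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2 * C // r, 2 * C)) * 0.1
b1 = np.zeros(2 * C // r)
w2 = rng.standard_normal((2 * C, 2 * C // r)) * 0.1
b2 = np.zeros(2 * C)
out = cam_fuse(f_ms, f_pan, w1, b1, w2, b2)
print(out.shape)  # (16, 16, 16)
```

Because the sigmoid gate lies in (0, 1), each output channel is a damped copy of the corresponding fused channel; in a trained network, informative channels receive gates near 1 and redundant ones are suppressed.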