Article

SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TGRS.2021.3130716

Keywords

Transformers; Feature extraction; Task analysis; Data mining; Hyperspectral imaging; Encoding; Convolutional neural networks; deep learning; hyperspectral (HS) image classification; local contextual information; remote sensing; sequence data; skip fusion; transformer

Funding

  1. National Key R&D Program of China [2021YFB3900502]
  2. National Natural Science Foundation of China [42030111]
  3. MIAI@Grenoble Alpes [ANR-19-P3IA-0003]
  4. AXA Research Fund
  5. Institute for Information & Communication Technology Planning & Evaluation (IITP), Republic of Korea [2020-0-01819-002] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

The article introduces SpectralFormer, a novel HS image classification network that leverages the transformer architecture to learn spectral sequence information, achieving better classification performance than traditional methods while remaining highly flexible.
Hyperspectral (HS) images are characterized by approximately contiguous spectral information, enabling the fine identification of materials by capturing subtle spectral discrepancies. Due to their excellent local contextual modeling ability, convolutional neural networks (CNNs) have proven to be powerful feature extractors in HS image classification. However, CNNs fail to mine and represent the sequence attributes of spectral signatures well, owing to the limitations of their inherent network backbone. To solve this issue, we rethink HS image classification from a sequential perspective with transformers and propose a novel backbone network called SpectralFormer. Beyond the bandwise representations of classic transformers, SpectralFormer is capable of learning spectrally local sequence information from neighboring bands of HS images, yielding groupwise spectral embeddings. More significantly, to reduce the possibility of losing valuable information during layerwise propagation, we devise a cross-layer skip connection that conveys memory-like components from shallow to deep layers by adaptively learning to fuse soft residuals across layers. It is worth noting that the proposed SpectralFormer is a highly flexible backbone network, applicable to both pixelwise and patchwise inputs. We evaluate the classification performance of SpectralFormer on three HS datasets through extensive experiments, showing its superiority over classic transformers and a significant improvement over state-of-the-art backbone networks. The code for this work is available at https://github.com/danfenghong/IEEE_TGRS_SpectralFormer for reproducibility.
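The two ideas the abstract highlights can be illustrated with a minimal sketch: building each token from a group of neighboring spectral bands rather than a single band (groupwise spectral embedding), and fusing shallow and deep features with a soft residual (a simplification of the paper's cross-layer adaptive fusion). This is not the authors' implementation; the function names, the fixed mixing weight `alpha`, and the random projection weights are placeholders for what the network learns end to end.

```python
import numpy as np

def groupwise_spectral_embedding(spectrum, group_size=7, embed_dim=8, rng=None):
    """Sketch of groupwise spectral embedding: each token is built from
    `group_size` adjacent bands and shares one linear projection.
    The projection weights are random stand-ins for learned parameters."""
    rng = rng or np.random.default_rng(0)
    bands = len(spectrum)
    half = group_size // 2  # group_size assumed odd
    # pad at both ends so every band has a full neighborhood
    padded = np.pad(np.asarray(spectrum, dtype=float), (half, half), mode="edge")
    # one token per band, each covering group_size adjacent bands
    groups = np.stack([padded[i:i + group_size] for i in range(bands)])
    # shared linear projection: (group_size,) -> (embed_dim,)
    w = rng.standard_normal((group_size, embed_dim))
    return groups @ w  # shape: (bands, embed_dim)

def cross_layer_fusion(shallow, deep, alpha=0.5):
    """Soft-residual fusion of shallow and deep features; in the paper the
    fusion is learned adaptively, here alpha is a fixed placeholder."""
    return alpha * shallow + (1.0 - alpha) * deep

# toy spectrum with 200 bands -> 200 tokens of dimension 8
tokens = groupwise_spectral_embedding(np.linspace(0.0, 1.0, 200))
print(tokens.shape)  # (200, 8)
```

Compared with embedding each band in isolation, the overlapping groups let every token carry local spectral context, which is the property the paper argues plain bandwise transformers lack.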
