Proceedings Paper

MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction

Publisher

IEEE
DOI: 10.1109/CVPRW56347.2022.00090

Keywords

-

Funding

  1. NSFC fund [61831014]
  2. Shenzhen Science and Technology Project [ZDYBH201900000002, CJGJZD20200617102601004]
  3. Westlake Foundation [2021B1501-2]
  4. NSF [IIS-2124179]
  5. Google Cloud

Abstract

This paper proposes a Transformer-based method, Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction. Spectral-wise Multi-head Self-attention (S-MSA) forms the basic Spectral-wise Attention Block (SAB); SABs are assembled into U-shaped single-stage networks that extract multi-resolution contextual information, and several such stages are cascaded to refine the reconstruction progressively. Experimental results demonstrate the superior performance of MST++ compared to other state-of-the-art methods.
Existing leading methods for spectral reconstruction (SR) focus on designing deeper or wider convolutional neural networks (CNNs) to learn the end-to-end mapping from an RGB image to its hyperspectral image (HSI). These CNN-based methods achieve impressive restoration performance but show limitations in capturing long-range dependencies and the self-similarity prior. To cope with this problem, we propose a novel Transformer-based method, Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction. In particular, we employ Spectral-wise Multi-head Self-attention (S-MSA), which exploits the spatially sparse yet spectrally self-similar nature of HSIs, to compose the basic unit, the Spectral-wise Attention Block (SAB). SABs then build up the Single-stage Spectral-wise Transformer (SST), which exploits a U-shaped structure to extract multi-resolution contextual information. Finally, our MST++, a cascade of several SSTs, progressively improves the reconstruction quality from coarse to fine. Comprehensive experiments show that our MST++ significantly outperforms other state-of-the-art methods. In the NTIRE 2022 Spectral Reconstruction Challenge, our approach won first place. Code and pre-trained models are publicly available at https://github.com/caiyuanhao1998/MST-plus-plus.
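The abstract's central idea is to compute self-attention along the spectral (channel) dimension, treating each channel map as a token. Below is a minimal PyTorch sketch of such a spectral-wise multi-head self-attention layer, written from the description above; it is not the authors' implementation (see the linked repository for that), and the class name SpectralMSA, the learnable per-head temperature, and the q/k normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralMSA(nn.Module):
    """Spectral-wise multi-head self-attention (illustrative sketch).

    Each spectral channel map is treated as a token, so the attention
    matrix is C x C instead of (H*W) x (H*W): cost scales linearly
    with spatial size.
    """
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # learnable per-head temperature (assumed), rescales attention logits
        self.scale = nn.Parameter(torch.ones(heads, 1, 1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map
        b, h, w, c = x.shape
        q, k, v = self.to_qkv(x.reshape(b, h * w, c)).chunk(3, dim=-1)

        def split_heads(t):
            # (B, HW, C) -> (B, heads, C/heads, HW): channels become tokens
            return t.reshape(b, h * w, self.heads, c // self.heads).permute(0, 2, 3, 1)

        q, k, v = map(split_heads, (q, k, v))
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, C/h, C/h)
        attn = attn.softmax(dim=-1)
        out = attn @ v                                 # (B, heads, C/h, HW)
        out = out.permute(0, 3, 1, 2).reshape(b, h * w, c)
        return self.proj(out).reshape(b, h, w, c)

# Usage: a 32x32 feature map with 64 channels keeps its shape.
msa = SpectralMSA(dim=64, heads=4)
y = msa(torch.randn(1, 32, 32, 64))  # -> (1, 32, 32, 64)
```

Because the attention map is C x C rather than HW x HW, the compute grows linearly with the number of pixels, which is what makes a Transformer practical for full-resolution spectral reconstruction; the multi-stage design then simply cascades several such U-shaped stages, each refining the previous estimate.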

Authors

Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, Luc Van Gool
