Article

Masked Auto-Encoding Spectral-Spatial Transformer for Hyperspectral Image Classification

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TGRS.2022.3217892

Keywords

Hyperspectral (HS) imaging; masked auto-encoders (MAEs); Vision Transformers (ViTs)

Funding

  1. Ministerio de Ciencia e Innovacion [PID2021-128794OB-I00]


This article presents a novel masked auto-encoding spectral-spatial transformer (MAEST) model, which combines two collaborative branches to classify and reconstruct hyperspectral remote sensing images, addressing the noise issue in conventional transformer networks.
Deep learning has become the dominant trend in hyperspectral (HS) remote sensing (RS) image classification owing to its excellent capability to extract highly discriminative spectral-spatial features. In this context, transformer networks have recently shown prominent results in distinguishing even the most subtle spectral differences because of their potential to characterize sequential spectral data. Nonetheless, many complexities affecting HS remote sensing data (e.g., atmospheric effects, thermal noise, and quantization noise) may severely undermine this potential, since no mechanism for relieving noisy feature patterns has yet been developed within transformer networks. To address this problem, this article presents a novel masked auto-encoding spectral-spatial transformer (MAEST), which combines two collaborative branches: 1) a reconstruction path, which dynamically uncovers the most robust encoding features through a masked auto-encoding strategy, and 2) a classification path, which embeds these features into a transformer network to classify the data while focusing on the features that best reconstruct the input. Unlike other existing models, this design aims to learn refined transformer features that account for the aforementioned complexities of the HS remote sensing image domain. An experimental comparison including several state-of-the-art methods and benchmark datasets shows the superior results obtained by MAEST. The code for this article will be available at https://github.com/ibanezfd/MAEST.
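The reconstruction path described above follows the general masked auto-encoding recipe: randomly hide a large fraction of the input tokens, encode only the visible ones, and train a decoder to reconstruct the hidden ones. A minimal sketch of that token-selection step is shown below; the function name, the 75% mask ratio, and the token count are illustrative assumptions, not details taken from the MAEST paper.

```python
import random

def random_masking(num_tokens, mask_ratio=0.75, seed=0):
    """Choose which spectral tokens stay visible, MAE-style.

    Returns the sorted indices of visible tokens and a boolean mask
    where True marks a hidden token the decoder must reconstruct.
    (Illustrative sketch only; not the authors' implementation.)
    """
    rng = random.Random(seed)
    n_keep = max(1, round(num_tokens * (1.0 - mask_ratio)))
    perm = list(range(num_tokens))
    rng.shuffle(perm)                   # random permutation of token indices
    keep = sorted(perm[:n_keep])        # visible tokens fed to the encoder
    mask = [True] * num_tokens          # True = masked / to be reconstructed
    for i in keep:
        mask[i] = False
    return keep, mask

# Example: 16 spectral tokens with a 75% mask ratio -> 4 remain visible
keep, mask = random_masking(16, mask_ratio=0.75)
```

The encoder then sees only the tokens indexed by `keep`, which is what makes this strategy cheap to train and, per the abstract, what pressures the model toward features robust to noisy spectral patterns.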

