Article

Subpixel Multilevel Scale Feature Learning and Adaptive Attention Constraint Fusion for Hyperspectral Image Classification

Journal

REMOTE SENSING
Volume 14, Issue 15, Pages -

Publisher

MDPI
DOI: 10.3390/rs14153670

Keywords

HSI classification; convolutional neural network (CNN); multiscale features; subpixel; adaptive attention fusion; feature enhancement

Funding

  1. National Natural Science Foundation of China [61801222]
  2. Natural Science Foundation of Jiangsu Province [BK20191284]
  3. Start Foundation of Nanjing University of Posts and Telecommunications (NUPTSF) [NY220157]
  4. Natural Science Research Project of Colleges and Universities of Jiangsu Province [22KJB510037]

Abstract

This paper investigates the issues of multiscale information and feature fusion in hyperspectral image classification and proposes an adaptive attention constraint fusion module and a semantic feature enhancement module. Experimental results demonstrate that the proposed method outperforms other state-of-the-art methods.
Convolutional neural networks (CNNs) play an important role in hyperspectral image (HSI) classification owing to their powerful feature extraction ability, and multiscale information is an important means of enhancing feature representation. However, current deep-learning-based HSI classification models use only fixed-size patches as the network input, which may not fully reflect the complexity and richness of HSIs. Although existing methods achieve good classification performance for large-scale scenes, classifying boundary locations and small-scale scenes remains challenging. In addition, dimensional mismatch often arises during feature fusion, and the up/downsampling operations used for feature alignment may introduce extra noise or cause feature loss. To address these issues, this paper explores multiscale features in depth, proposes an adaptive attention constraint fusion module for features at different scales, and designs a semantic feature enhancement module for high-dimensional features. First, HSI data at two different spatial scales are fed into the model, and each input is upsampled with bilinear interpolation to obtain its subpixel counterpart. The proposed multiscale feature extraction module then extracts features from these four data streams. The extracted features are fused by the multiscale attention fusion module, and the fused features are passed to the high-level semantic feature enhancement module. Finally, the prediction results are obtained through a fully connected layer and a softmax layer. Experimental results on four public HSI databases verify that the proposed method outperforms several state-of-the-art methods.
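
The data flow described above (dual-scale patch inputs, bilinear subpixel upsampling, per-stream feature extraction, adaptive attention fusion, semantic enhancement, and a fully connected softmax classifier) can be sketched roughly as follows. This is a minimal PyTorch illustration of the pipeline, not the authors' implementation: the class name MultiScaleSubpixelClassifier, module structure, channel counts, and patch sizes are all assumptions made for clarity.

# Hedged sketch of the dual-scale / subpixel pipeline from the abstract.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSubpixelClassifier(nn.Module):
    def __init__(self, in_bands: int, num_classes: int, feat_dim: int = 64):
        super().__init__()
        # One lightweight extractor per input stream (two patch scales,
        # each paired with its bilinearly upsampled "subpixel" counterpart).
        self.extractors = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_bands, feat_dim, kernel_size=3, padding=1),
                nn.BatchNorm2d(feat_dim),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),  # pool to a per-stream feature vector
            )
            for _ in range(4)
        )
        # Adaptive attention fusion: a learned softmax weight per stream,
        # so differently sized feature maps need no up/downsampling alignment.
        self.attn = nn.Linear(feat_dim, 1)
        # Semantic enhancement of the fused high-level feature (assumed MLP).
        self.enhance = nn.Sequential(nn.Linear(feat_dim, feat_dim),
                                     nn.ReLU(inplace=True))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_small: torch.Tensor, patch_large: torch.Tensor):
        # Bilinear interpolation yields the subpixel versions of both patches.
        sub_small = F.interpolate(patch_small, scale_factor=2,
                                  mode="bilinear", align_corners=False)
        sub_large = F.interpolate(patch_large, scale_factor=2,
                                  mode="bilinear", align_corners=False)
        streams = [patch_small, sub_small, patch_large, sub_large]

        feats = torch.stack(
            [ext(x).flatten(1) for ext, x in zip(self.extractors, streams)],
            dim=1)                                         # (B, 4, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)   # (B, 4, 1)
        fused = (weights * feats).sum(dim=1)               # (B, feat_dim)
        logits = self.classifier(self.enhance(fused))
        return logits  # softmax / cross-entropy applied outside the model

# Example with assumed 5x5 and 9x9 patches from a 200-band HSI cube.
model = MultiScaleSubpixelClassifier(in_bands=200, num_classes=16)
small = torch.randn(8, 200, 5, 5)
large = torch.randn(8, 200, 9, 9)
print(model(small, large).shape)  # torch.Size([8, 16])

Weighting whole per-stream feature vectors with a learned softmax is one simple way to fuse unequal-sized inputs without the up/downsampling step that the abstract identifies as a source of extra noise and feature loss.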
