4.3 Article

Hyperspectral image classification method based on M-3DCNN-Attention

Journal

JOURNAL OF APPLIED REMOTE SENSING
Volume 16, Issue 2

Publisher

SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS
DOI: 10.1117/1.JRS.16.026507

Keywords

hyperspectral image classification; Mixup; three-dimensional convolutional neural network; attention module

Funding

  1. Fundamental Research Foundation for Universities of Heilongjiang Province [LGYC2018JC045]
  2. National Natural Science Foundation of China [61803128, 61671190]

Abstract

This study proposes an HSI classification method based on M-3DCNN-Attention, which improves classification accuracy through the construction of virtual samples and an enhanced network structure, outperforming comparative methods.
Hyperspectral image (HSI) classification methods based on three-dimensional convolutional neural networks (3DCNN) tend to overfit when trained on small sample sets and have difficulty highlighting discriminant features, which reduces classification accuracy. To solve these problems, an HSI classification method based on M-3DCNN-Attention is proposed. First, the Mixup algorithm is used to construct virtual HSI samples to expand the original data set. The expanded data set is twice the size of the original, which greatly alleviates the overfitting caused by the small number of HSI samples. Second, the structure of the 3DCNN is improved: a convolutional block attention module (CBAM) is added between each 3D convolutional layer and the following ReLU layer, for a total of three CBAMs, so as to highlight the discriminant features in the spectral and spatial dimensions of the HSI and suppress the nondiscriminant features. Finally, the spectral-spatial features are passed to a Softmax classifier to obtain the final classification results. Comparative experiments are conducted on three hyperspectral data sets (Indian Pines, Pavia University, and Salinas), on which M-3DCNN-Attention achieves overall accuracies of 99.90%, 99.93%, and 99.36%, respectively, outperforming the comparative methods. (c) 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
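The abstract describes two technical ingredients: Mixup-based virtual-sample generation and Conv3d-CBAM-ReLU stages. The sketch below illustrates both in PyTorch under stated assumptions; the kernel sizes, Beta parameter alpha, channel counts, and the 3D adaptation of CBAM are illustrative guesses, not the authors' exact settings.

```python
# Minimal PyTorch sketch (assumptions): Mixup expansion of HSI patch cubes and a
# Conv3d -> CBAM -> ReLU stage, as outlined in the abstract. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta


def mixup_expand(patches, labels, alpha=0.2):
    """Create one virtual sample per real sample, doubling the training set.

    patches: (N, 1, bands, H, W) HSI cubes; labels: (N, num_classes) one-hot.
    alpha is an assumed Beta-distribution parameter.
    """
    lam = Beta(alpha, alpha).sample()                       # mixing coefficient
    perm = torch.randperm(patches.size(0))                  # random pairing of samples
    virtual_x = lam * patches + (1 - lam) * patches[perm]   # virtual spectra-spatial cubes
    virtual_y = lam * labels + (1 - lam) * labels[perm]     # soft labels for virtual samples
    return torch.cat([patches, virtual_x]), torch.cat([labels, virtual_y])


class CBAM3D(nn.Module):
    """Channel + spatial attention, adapted here from CBAM to 3D feature maps."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(                           # shared MLP for channel attention
            nn.Linear(channels, max(channels // reduction, 1)), nn.ReLU(),
            nn.Linear(max(channels // reduction, 1), channels))
        self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c = x.shape[:2]
        avg = self.mlp(F.adaptive_avg_pool3d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool3d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)          # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))                    # spatial attention


class ConvCBAMBlock(nn.Module):
    """One Conv3d -> CBAM -> ReLU stage; the abstract stacks three such stages."""

    def __init__(self, in_ch, out_ch, kernel=(7, 3, 3)):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel)
        self.cbam = CBAM3D(out_ch)

    def forward(self, x):
        return F.relu(self.cbam(self.conv(x)))
```

Per the abstract, three such blocks would be followed by flattening and a fully connected layer with Softmax over the class scores; those final layers are omitted here since their sizes are not given.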

