Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 31, Pages 4251-4265
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2022.3177322
Keywords
Hyperspectral image classification; convolutional neural network; rotation-invariant network; spectral-spatial feature extraction; attention mechanism
Funding
- National Science Fund for Distinguished Young Scholars [61925112]
- Innovation Capability Support Program of Shaanxi [2020KJXX-091]
- Chinese Association for Artificial Intelligence (CAAI)-Huawei MindSpore Open Fund
This paper proposes a rotation-invariant attention network (RIAN) for HSI classification, which extracts rotation-invariant spectral-spatial features using center spectral attention and rectified spatial attention modules. Experimental results show that RIAN performs well on HSIs with spatial rotation.
Hyperspectral image (HSI) classification refers to identifying the land-cover categories of pixels based on the spectral signatures and spatial information of HSIs. In recent deep learning-based methods, to exploit the spatial information of HSIs, an HSI patch is usually cropped from the original HSI as the input, and 3 x 3 convolution is utilized as a key component to capture spatial features for HSI classification. However, the 3 x 3 convolution is sensitive to the spatial rotation of its inputs, which causes recent methods to perform worse on rotated HSIs. To alleviate this problem, a rotation-invariant attention network (RIAN) is proposed for HSI classification. First, a center spectral attention (CSpeA) module is designed to suppress redundant spectral bands while avoiding the influence of pixels from other categories. Then, a rectified spatial attention (RSpaA) module is proposed to replace the 3 x 3 convolution for extracting rotation-invariant spectral-spatial features from HSI patches. The CSpeA module, the 1 x 1 convolution, and the RSpaA module are combined to build the proposed RIAN for HSI classification. Experimental results demonstrate that RIAN is invariant to the spatial rotation of HSIs and has superior performance, e.g., achieving an overall accuracy of 86.53% (a 1.04% improvement) on the Houston database.
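The rotation sensitivity of the 3 x 3 convolution, and the rotation invariance of the 1 x 1 convolution that RIAN builds on, can be checked directly. The following is a minimal PyTorch sketch (not from the paper; the patch size, band count, and layer widths are arbitrary assumptions) comparing how each operation commutes with a 90-degree spatial rotation of an HSI patch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 8, 9, 9)  # toy HSI patch: batch 1, 8 spectral bands, 9 x 9 pixels

def rot(t):
    # 90-degree spatial rotation over the height/width dimensions
    return torch.rot90(t, 1, dims=(2, 3))

conv3 = nn.Conv2d(8, 16, kernel_size=3, padding=1, bias=False)
conv1 = nn.Conv2d(8, 16, kernel_size=1, bias=False)

with torch.no_grad():
    # A 3 x 3 kernel mixes each pixel with its neighbors in a fixed layout,
    # so rotating the input is not equivalent to rotating the output.
    err3 = (conv3(rot(x)) - rot(conv3(x))).abs().max().item()
    # A 1 x 1 kernel acts on each pixel's spectrum independently,
    # so it commutes exactly with any spatial permutation, rotation included.
    err1 = (conv1(rot(x)) - rot(conv1(x))).abs().max().item()

print(f"3 x 3 conv rotation-equivariance error: {err3:.4f}")  # clearly nonzero
print(f"1 x 1 conv rotation-equivariance error: {err1:.4f}")  # exactly 0.0
```

This contrast is plausibly why the abstract pairs the 1 x 1 convolution with attention modules instead of 3 x 3 convolutions: per-pixel spectral mixing carries no fixed spatial orientation, while the RSpaA module supplies the spatial context that a plain 1 x 1 convolution alone would miss.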