Article

Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 8671-8685

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2021.3118977

Keywords

Deep learning; Training; Hyperspectral imaging; Feature extraction; Task analysis; Perturbation methods; Predictive models; Hyperspectral image (HSI) classification; adversarial example; adversarial attack; adversarial defense; convolutional neural network (CNN); deep learning

Funding

  1. National Natural Science Foundation of China [61822113, 41820104006, 61871299, 41871243]
  2. Science and Technology Major Project of Hubei Province (Next-Generation AI Technologies) [2019AEA170]

Abstract

Deep learning models have shown great capability for the hyperspectral image (HSI) classification task in recent years. Nevertheless, their vulnerability to adversarial attacks cannot be neglected. In this study, we systematically analyze the influence of adversarial attacks on the HSI classification task for the first time. While existing research on adversarial attacks focuses on generating adversarial examples in the RGB domain, the experiments in this study show that such adversarial examples also exist in the hyperspectral domain. Although the difference between the generated adversarial image and the original hyperspectral data is imperceptible to the human visual system, most existing state-of-the-art deep learning models can be fooled by the adversarial image into making wrong predictions. To address this challenge, we further propose a novel self-attention context network (SACNet). We discover that the global context information contained in HSIs can significantly improve the robustness of deep neural networks when they are confronted with adversarial attacks. Extensive experiments on three benchmark HSI datasets demonstrate that the proposed SACNet possesses stronger resistance to adversarial examples than existing state-of-the-art deep learning models.
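To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code). The `fgsm_attack` function crafts an imperceptible adversarial perturbation for a hyperspectral patch with the Fast Gradient Sign Method, a common gradient-based attack assumed here since the abstract does not name one; the `SelfAttentionContext` module is a toy non-local block in which every pixel attends to all others, illustrating the kind of global context information the paper credits for SACNet's robustness. All names, shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.01):
    """FGSM: one signed-gradient step on an HSI patch x of shape (B, bands, H, W).

    eps bounds the per-band perturbation magnitude, keeping the adversarial
    image visually indistinguishable from the original hyperspectral data.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # y: (B,) ground-truth class indices
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

class SelfAttentionContext(nn.Module):
    """Toy self-attention block: each spatial position attends to every other
    position, so per-pixel features carry global context (assumes channels >= 8)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C/8)
        k = self.key(x).flatten(2)                    # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) global affinities
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                # residual: local + global context
```

Intuitively, a small eps-bounded perturbation can flip the prediction of a model that relies only on a local spectral-spatial neighborhood, whereas a prediction aggregated over global context, as in the residual attention block above, is harder to move with the same perturbation budget.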

