4.7 Article

Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TGRS.2020.2999962

Keywords

Remote sensing; Neural networks; Deep learning; Perturbation methods; Feature extraction; Task analysis; Image color analysis; Adversarial attack; adversarial defense; adversarial example; deep learning; remote sensing; scene classification

Funding

  1. National Natural Science Foundation of China [41431175, 41871243, 61822113]
  2. National Key Research and Development Program of China [2018YFA060550]
  3. Fundamental Research Funds for the Central Universities [41300082]
  4. Science and Technology Major Project of Hubei Province (Next-Generation AI Technologies) [2019AEA170]

Abstract

This article systematically analyzes the threat of adversarial examples on deep neural networks for remote sensing scene classification, showing that most deep learning models are sensitive to adversarial perturbations but the adversarial training strategy helps alleviate their vulnerability.
Deep neural networks, which can learn representative and discriminative features from data in a hierarchical manner, have achieved state-of-the-art performance in the remote sensing scene classification task. Despite the great success that deep learning algorithms have obtained, their vulnerability to adversarial examples deserves special attention. In this article, we systematically analyze the threat of adversarial examples on deep neural networks for remote sensing scene classification. Both targeted and untargeted attacks are performed to generate subtle adversarial perturbations that are imperceptible to a human observer but may easily fool deep learning models. By simply adding these perturbations to the original high-resolution remote sensing (HRRS) images, adversarial examples can be generated that differ only slightly from the original images. An intriguing discovery in our study is that most of these adversarial examples are misclassified by state-of-the-art deep neural networks with very high confidence. This phenomenon may limit the practical deployment of deep learning models in the safety-critical remote sensing field. To address this problem, the adversarial training strategy is further investigated in this article, which significantly increases the resistance of deep models to adversarial examples. Extensive experiments on three benchmark HRRS image data sets demonstrate that while most of the well-known deep neural networks are sensitive to adversarial perturbations, the adversarial training strategy helps alleviate their vulnerability to adversarial examples.
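
The abstract describes the mechanics only at a high level: an attacker adds a small, nearly imperceptible perturbation to an HRRS image so that the classifier's prediction flips, and adversarial training re-trains the model on such perturbed images to harden it. For orientation, below is a minimal, hypothetical PyTorch sketch of an untargeted FGSM-style attack and a single adversarial-training step; the framework choice, the epsilon budget, and the 50-50 clean/adversarial loss weighting are illustrative assumptions, not the paper's actual configuration, which is not listed in this record.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=4 / 255):
    # Untargeted fast gradient sign method: perturb each pixel in the direction
    # that increases the classification loss, within an L-infinity budget.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=4 / 255):
    # One mini-batch update that mixes clean and adversarial examples
    # (a common form of adversarial training; exact recipe is an assumption).
    model.eval()                      # keep batch-norm statistics fixed while attacking
    adv_images = fgsm_attack(model, images, labels, epsilon)
    model.train()
    optimizer.zero_grad()             # discard gradients accumulated by the attack
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In the paper's setting, images would be mini-batches of HRRS scenes and model a standard CNN scene classifier; stronger iterative attacks and targeted variants follow the same pattern of stepping along the loss gradient.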
