Article

Generating Natural Adversarial Remote Sensing Images

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2021.3110601

Keywords

Generators; Training; Perturbation methods; Neural networks; Generative adversarial networks; Security; Inverters; Adversarial examples; deep learning; generative models; remote sensing

Funding

  1. OATMIL project of the Agence Nationale de la Recherche (ANR) [ANR-17-CE23-0012]
  2. 3IA Cote d'Azur, Investments in the Future project of the French National Research Agency (ANR) [ANR-19-P3IA-0002]
  3. 3rd Programme d'Investissements d'Avenir [ANR-18-EUR0006-02]
  4. Chair "Challenging Technology for Responsible Energy" of the Fondation de l'Ecole polytechnique, funded by Total

Abstract

This article presents a method for generating adversarial examples against black-box neural networks and demonstrates its effectiveness on image generation and modification. A perceptual evaluation with human annotators was also conducted to assess the proposed method.
Over the last years, remote sensing image (RSI) analysis has increasingly resorted to deep neural networks to solve most of the commonly faced problems, such as detection, land cover classification, or segmentation. Since critical decision-making can be based on the results of RSI analysis, it is important to clearly identify and understand the potential security threats affecting those machine learning algorithms. Notably, it has recently been found that neural networks are particularly sensitive to carefully designed attacks, generally crafted with full knowledge of the considered deep network. In this article, we consider the more realistic but challenging case where one wants to generate such attacks against a black-box neural network, where only the network's prediction scores on a specific dataset are accessible. Examples that lead the network's prediction astray while remaining perceptually similar to real images are called natural or unrestricted adversarial examples. We present an original method to generate such examples based on a variant of the Wasserstein generative adversarial network. We demonstrate its effectiveness on natural adversarial hyperspectral image generation and on image modification for fooling a state-of-the-art detector. We also conduct a perceptual evaluation with human annotators to better assess the effectiveness of the proposed method. Our code is available for the community: https://github.com/PythonOT/ARWGAN.
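For readers curious about the mechanics, the sketch below illustrates the general idea the abstract describes: training a generator whose samples both look realistic (here via a Wasserstein critic with gradient penalty) and lower a target classifier's confidence in the true class. This is a minimal toy illustration, not the ARWGAN implementation from the repository above; the network architectures, dimensions, hyperparameters, and the `target_score_fn` callable are all assumptions made for this example.

```python
# Minimal PyTorch sketch of natural adversarial example generation with
# a Wasserstein GAN: the generator is trained to produce realistic
# samples (WGAN-GP losses) plus an extra term that rewards lowering a
# target classifier's score on the true class. Illustration only; NOT
# the ARWGAN implementation. All dimensions/architectures are toy.

import torch
import torch.nn as nn

z_dim, img_ch, img_sz = 64, 3, 32          # assumed toy dimensions
n_pix = img_ch * img_sz * img_sz           # images handled flattened

G = nn.Sequential(                          # toy generator: z -> image
    nn.Linear(z_dim, 256), nn.ReLU(),
    nn.Linear(256, n_pix), nn.Tanh(),
)
D = nn.Sequential(                          # toy critic: image -> scalar
    nn.Linear(n_pix, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

def gradient_penalty(D, real, fake):
    """WGAN-GP term: push the critic's gradient norm toward 1."""
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.9))

def train_step(real, target_score_fn, lam_gp=10.0, lam_adv=1.0):
    """One WGAN-GP step with an extra 'fooling' term.

    target_score_fn(img) is a hypothetical callable returning the
    attacked model's confidence in the true class. In a genuinely
    black-box setting this term would have to be estimated from score
    queries rather than differentiated with autograd.
    """
    z = torch.randn(real.size(0), z_dim)
    fake = G(z)

    # Critic update: detach fake so the generator graph is untouched.
    loss_d = (D(fake.detach()).mean() - D(real).mean()
              + lam_gp * gradient_penalty(D, real, fake.detach()))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: look realistic to the critic AND reduce the
    # target model's confidence in the true class.
    loss_g = -D(fake).mean() + lam_adv * target_score_fn(fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In practice, `target_score_fn` might wrap the attacked model's softmax output for the ground-truth class; the trade-off between realism and fooling power is then controlled by the assumed weight `lam_adv`.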

