Article

Generating Natural Adversarial Remote Sensing Images

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TGRS.2021.3110601

Keywords

Generators; Training; Perturbation methods; Neural networks; Generative adversarial networks; Security; Inverters; Adversarial examples; deep learning; generative models; remote sensing

Funding

  1. OATMIL [ANR-17-CE23-0012]
  2. 3IA Côte d'Azur, Investments for the Future program of the French National Research Agency (ANR) [ANR-19-P3IA-0002]
  3. 3rd Programme d'Investissements d'Avenir [ANR-18-EUR0006-02]
  4. Chair "Challenging Technology for Responsible Energy"
  5. Fondation de l'École polytechnique, through Total
  6. Agence Nationale de la Recherche (ANR) [ANR-17-CE23-0012]

This article addresses the generation of adversarial examples against black-box neural networks, for which only the prediction scores are accessible. Experiments demonstrate the effectiveness of the proposed generative method for both adversarial image generation and image modification, and a perceptual evaluation with human annotators assesses how natural the resulting images appear.
Over the last years, remote sensing image (RSI) analysis has increasingly relied on deep neural networks to solve most of its common problems, such as detection, land cover classification, or segmentation. Since critical decisions can be based on the results of RSI analysis, it is important to clearly identify and understand the potential security threats affecting these machine learning algorithms. Notably, it has recently been found that neural networks are particularly sensitive to carefully designed attacks, generally crafted with full knowledge of the targeted deep network. In this article, we consider the more realistic but challenging case where such attacks must be generated against a black-box neural network: only the prediction scores of the network, on a specific dataset, are accessible. Examples that mislead the network's prediction while remaining perceptually similar to real images are called natural or unrestricted adversarial examples. We present an original method to generate such examples based on a variant of the Wasserstein generative adversarial network. We demonstrate its effectiveness on natural adversarial hyperspectral image generation and on image modification for fooling a state-of-the-art detector. Among other experiments, we also conduct a perceptual evaluation with human annotators to better assess the effectiveness of the proposed method. Our code is available for the community: https://github.com/PythonOT/ARWGAN
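The abstract describes a two-part recipe: a generator is trained with a Wasserstein-GAN-style realism objective while its outputs are simultaneously pushed to lower the black-box victim's score for the correct class, using score queries only. The sketch below illustrates that general idea; it is not the authors' implementation (the reference code is in the ARWGAN repository linked above), and the toy networks, the zeroth-order gradient estimator, and all hyperparameters are illustrative assumptions.

# Minimal sketch, assuming PyTorch. Toy stand-ins for the generator, the WGAN
# critic, and the black-box victim classifier; the victim is queried only for
# prediction scores, and its input gradient is approximated with a simple
# zeroth-order (NES-like) estimator. Not the authors' method.
import torch
import torch.nn as nn

latent_dim = 64
img_dim = 3 * 32 * 32        # flattened toy image size
n_classes = 10

generator = nn.Sequential(   # latent vector -> flattened image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())
critic = nn.Sequential(      # WGAN critic: image -> realism score
    nn.Linear(img_dim, 256), nn.ReLU(),
    nn.Linear(256, 1))
victim = nn.Sequential(      # stands in for the black-box classifier
    nn.Linear(img_dim, n_classes), nn.Softmax(dim=1))

def victim_scores(x):
    """Black-box oracle: returns prediction scores only, no gradients."""
    with torch.no_grad():
        return victim(x)

def estimated_adv_grad(x, label, sigma=0.1, n_samples=16):
    """Zeroth-order estimate of d score[label] / d x from score queries."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        s_pos = victim_scores(x + sigma * u)[:, label].unsqueeze(1)
        s_neg = victim_scores(x - sigma * u)[:, label].unsqueeze(1)
        grad += (s_pos - s_neg) / (2 * sigma) * u
    return grad / n_samples

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
lam = 1.0          # weight of the adversarial term (illustrative)
true_label = 3     # class the generated images should move away from

for step in range(100):
    real = torch.rand(32, img_dim)   # placeholder for real remote sensing data
    z = torch.randn(32, latent_dim)
    fake = generator(z)

    # Critic update: plain WGAN objective, weight clipping kept for brevity.
    loss_c = critic(fake.detach()).mean() - critic(real).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-0.01, 0.01)

    # Generator update: realism term plus black-box adversarial term whose
    # gradient w.r.t. the generated image is the estimated victim gradient.
    z = torch.randn(32, latent_dim)
    fake = generator(z)
    adv_grad = estimated_adv_grad(fake.detach(), true_label)   # queries only
    loss_g = -critic(fake).mean() + lam * (fake * adv_grad).sum(dim=1).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In the paper the same principle is applied to hyperspectral image generation and to modifying images so that a state-of-the-art detector is fooled; the sketch only shows the structure of the two loss terms under the stated assumptions.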


Reviews

Primary Rating

4.7 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
