Article

An Empirical Study of Adversarial Examples on Remote Sensing Image Scene Classification

Journal

IEEE Transactions on Geoscience and Remote Sensing
Volume 59, Issue 9, Pages 7419-7433

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TGRS.2021.3051641

Keywords

Remote sensing; Optical sensors; Optical imaging; Feature extraction; Training; Radar polarimetry; Perturbation methods; Adversarial examples; deep neural networks (DNNs); remote sensing image (RSI) scene classification

Funding

  1. National Natural Science Foundation of China [41871364, 41571397, 41671357, 41871302]


This study tested eight state-of-the-art classification DNNs on six RSI benchmarks and found that adversarial examples can substantially impact remote sensing image scene classification. For optical data, the severity of the adversarial problem is negatively related to the richness of the feature information, while adversarial examples generated from SAR images easily fool the models.
Deep neural networks (DNNs), which learn a hierarchical representation of features, have shown remarkable performance in big data analytics of remote sensing. However, previous research indicates that DNNs are easily spoofed by adversarial examples, which are crafted images with artificial perturbations that push DNN models toward wrong predictions. To comprehensively evaluate the impact of adversarial examples on remote sensing image (RSI) scene classification, this study tests eight state-of-the-art classification DNNs on six RSI benchmarks. These data sets include both optical and synthetic-aperture radar (SAR) images of different spectral and spatial resolutions. In the experiment, we create 48 classification scenarios and use four cutting-edge attack algorithms to investigate the influence of adversarial examples on the classification of RSIs. The experimental results show that the fooling rates of the attacks are all over 98% across the 48 scenarios. We also find that, for the optical data, the severity of the adversarial problem is negatively related to the richness of the feature information. In addition, adversarial examples generated from SAR images easily fool the models, with an average fooling rate of 76.01%. By analyzing the class distribution of these adversarial examples, we find that the distribution of the misclassifications is not affected by the type of model or attack algorithm: adversarial examples of RSIs of the same class cluster into a few fixed classes. Analyzing the classes of adversarial examples not only helps us explore the relationships between data set classes but also provides insights for designing defensive algorithms.
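
To make the evaluation concrete, the sketch below illustrates the general pipeline the abstract describes: craft an adversarial perturbation for a batch of images and measure the fooling rate (the fraction of correctly classified inputs that the perturbed version causes to be misclassified). It uses a single-step FGSM attack in PyTorch purely as a stand-in, since the abstract does not name the four attack algorithms, the eight DNN architectures, or the perturbation budget used in the paper; the ResNet-18 model, random tensors, and epsilon value are hypothetical placeholders, not the authors' experimental setup.

```python
# Minimal sketch: FGSM adversarial examples and fooling-rate measurement.
# FGSM, the ResNet-18 model, and the toy data below are illustrative assumptions,
# not the attacks, models, or benchmarks evaluated in the paper.
import torch
import torch.nn.functional as F
import torchvision.models as models


def fgsm_attack(model, images, labels, epsilon=0.03):
    """Craft untargeted adversarial examples with one signed-gradient step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()  # move each pixel toward higher loss
    return adv.clamp(0.0, 1.0).detach()          # keep pixels in the valid [0, 1] range


def fooling_rate(model, images, labels, epsilon=0.03):
    """Fraction of correctly classified inputs whose adversarial version is misclassified."""
    model.eval()
    clean_pred = model(images).argmax(dim=1)
    correct = clean_pred.eq(labels)
    adv = fgsm_attack(model, images, labels, epsilon)
    adv_pred = model(adv).argmax(dim=1)
    fooled = correct & adv_pred.ne(labels)
    return fooled.sum().item() / max(correct.sum().item(), 1)


if __name__ == "__main__":
    # Hypothetical stand-ins: an untrained ResNet-18 and random "scene" patches.
    model = models.resnet18(weights=None, num_classes=10)
    x = torch.rand(8, 3, 224, 224)
    y = torch.randint(0, 10, (8,))
    print(f"Fooling rate on this toy batch: {fooling_rate(model, x, y):.2%}")
```

The same fooling-rate measurement, repeated per attack algorithm and per model/data-set pairing, is how fooling rates such as the 98% and 76.01% figures reported in the abstract would be aggregated across scenarios.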
