Article

An Empirical Study of Adversarial Examples on Remote Sensing Image Scene Classification

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2021.3051641

Keywords

Remote sensing; Optical sensors; Optical imaging; Feature extraction; Training; Radar polarimetry; Perturbation methods; Adversarial examples; deep neural networks (DNNs); remote sensing image (RSI) scene classification

Funding

  1. National Natural Science Foundation of China [41871364, 41571397, 41671357, 41871302]

Abstract

This study tested eight state-of-the-art classification DNNs on six RSI benchmarks and found that adversarial examples can severely disrupt remote sensing image scene classification. For optical data, the severity of the adversarial problem is negatively related to the richness of the feature information, while adversarial examples generated from SAR images easily fool the models.
Deep neural networks (DNNs), which learn a hierarchical representation of features, have shown remarkable performance in big data analytics for remote sensing. However, previous research indicates that DNNs are easily spoofed by adversarial examples: images crafted with artificial perturbations that push DNN models toward wrong predictions. To comprehensively evaluate the impact of adversarial examples on remote sensing image (RSI) scene classification, this study tests eight state-of-the-art classification DNNs on six RSI benchmarks. These data sets include both optical and synthetic-aperture radar (SAR) images of different spectral and spatial resolutions. In the experiments, we create 48 classification scenarios and use four cutting-edge attack algorithms to investigate the influence of adversarial examples on the classification of RSIs. The experimental results show that the fooling rates of the attacks exceed 98% across all 48 scenarios. We also find that, for the optical data, the severity of the adversarial problem is negatively related to the richness of the feature information. In addition, adversarial examples generated from SAR images easily fool the models, with an average fooling rate of 76.01%. By analyzing the class distribution of these adversarial examples, we find that the distribution of misclassifications is not affected by the type of model or attack algorithm: adversarial examples of RSIs from the same class cluster on a few fixed classes. The analysis of the classes of adversarial examples not only helps us explore the relationships between data set classes but also provides insights for designing defensive algorithms.
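The abstract reports fooling rates above 98% across the 48 classification scenarios and an average of 76.01% for SAR imagery. As a rough illustration of how such a metric is computed, the sketch below uses a one-step FGSM-style gradient-sign attack as a stand-in for the paper's four (unnamed here) attack algorithms; the classifier, data loader, epsilon value, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not details taken from the paper.

    # Illustrative sketch, not the paper's exact pipeline: a one-step FGSM-style
    # attack and the fooling-rate metric (fraction of originally correct
    # predictions that flip after perturbation). Assumes a PyTorch image
    # classifier and inputs already scaled to [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.01):
        # Perturb each pixel by +/- epsilon along the sign of the loss gradient.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad, = torch.autograd.grad(loss, images)
        adv = images + epsilon * grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    @torch.no_grad()
    def predict(model, images):
        return model(images).argmax(dim=1)

    def fooling_rate(model, loader, epsilon=0.01, device="cpu"):
        # Fooling rate over the samples the model classifies correctly before attack.
        model.eval()
        fooled, correct = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            mask = predict(model, images) == labels  # keep only correct samples
            if mask.any():
                adv = fgsm_attack(model, images[mask], labels[mask], epsilon)
                fooled += (predict(model, adv) != labels[mask]).sum().item()
                correct += int(mask.sum())
        return fooled / max(correct, 1)

In this hedged setup, a fooling rate near 98% would mean that nearly every correctly classified scene image is pushed to a wrong class by the perturbation, which is the kind of result the study reports for optical benchmarks.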
