Article

Attack Selectivity of Adversarial Examples in Remote Sensing Image Scene Classification

Journal

IEEE ACCESS
Volume 8, Pages 137477-137489

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2020.3011639

Keywords

Feature extraction; Data models; Remote sensing; Computational modeling; Perturbation methods; Security; Robustness; Remote sensing image; deep learning; convolutional neural network; adversarial example

Funding

  1. National Natural Science Foundation of China [41871364, 41871302, 41871276, 41861048, 41771458]

Abstract
Remote sensing image (RSI) scene classification is a foundational technology for ground object detection, land use management and geographic analysis. In recent years, convolutional neural networks (CNNs) have achieved significant success and are widely applied to RSI scene classification. However, crafted images known as adversarial examples can fool CNNs with high confidence, while the added perturbations are hard for the human eye to perceive. Given the increasing security and robustness requirements of RSI scene classification, adversarial examples pose a serious threat to classification results produced by CNN-based systems, a risk that has not been fully recognized in previous research. In this study, to explore the properties of adversarial examples in RSI scene classification, we create different scenarios by applying two major attack algorithms, the fast gradient sign method (FGSM) and the basic iterative method (BIM), to CNNs (InceptionV1, ResNet and a simple CNN) trained on different RSI benchmark datasets. Our experimental results show that CNNs for RSI scene classification are also vulnerable to adversarial examples, with fooling rates exceeding 80% in some settings. The adversarial examples are affected by both the CNN architecture and the RSI dataset: InceptionV1 has a fooling rate below 5%, lower than the other models, and adversarial examples are easier to generate on the UCM dataset than on the other datasets. Importantly, we also find that the classes predicted for adversarial examples exhibit an attack selectivity property: misclassifications of adversarial RSIs are related to the similarity of the original classes in the CNN feature space. Attack selectivity reveals the likely target classes of adversarial examples and provides insights for the design of defensive algorithms in future research.
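For reference, the sketch below shows how the two attacks named in the abstract (FGSM and BIM) are commonly implemented for an image classifier. It is a minimal illustration assuming a PyTorch model with inputs normalized to [0, 1]; the function names and hyperparameters are illustrative and do not reproduce the authors' experimental code.

```python
# Minimal sketch of untargeted FGSM and BIM attacks (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.01):
    """Single-step FGSM: x_adv = x + eps * sign(grad_x loss(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid range

def bim_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    """BIM: repeat small FGSM steps, projecting back into an eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

A fooling-rate style evaluation, as studied in the paper, would then compare the model's predictions on the clean batch `x` and on the returned `x_adv` and report the fraction of examples whose predicted class changes.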
