Article

Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation

Journal

PATTERN RECOGNITION
Volume 115, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2021.107903

Keywords

Object detection; Adversarial attack; Adaptive object-oriented perturbation

Funding

  1. University of Macau [MYRG2018-00035-FST, MYRG2019-00086-FST]
  2. Science and Technology Development Fund, Macau SAR [0034/2019/AMJ, 0019/2019/A]


Deep learning excels at complex tasks, but Deep Neural Networks are vulnerable to carefully crafted adversarial perturbations. The AO²AM algorithm focuses on object-level adversarial perturbations to fool deep neural object detection networks effectively.
Deep learning has shown superiority in complex, specialized tasks (e.g., computer vision, audio, and language processing). However, research has confirmed that Deep Neural Networks (DNNs) are vulnerable to carefully crafted adversarial perturbations, which confuse DNNs on specific tasks. In the object detection domain, the background contributes little to object classification; adversarial perturbations added to the background do not strengthen the attack against deep neural detection models, yet they introduce substantial distortion into the generated examples. Motivated by this, we introduce an adversarial attack algorithm named the Adaptive Object-oriented Adversarial Method (AO²AM). It fools deep neural object detection networks by adaptively accumulating object-based gradients and adding adaptive object-based adversarial perturbations only onto the objects rather than the whole frame of the input image. AO²AM effectively pushes the latent-space representations of the generated adversarial samples toward the decision boundary, forcing deep neural detection networks to yield inaccurate localizations and false classifications. Compared with existing adversarial attack methods that perturb the original inputs at the global scale, the adversarial examples produced by AO²AM effectively fool deep neural object detection networks while maintaining high structural similarity to the corresponding clean inputs. Attacking Faster R-CNN, AO²AM achieves an attack success rate (ASR) above 98.00% on pre-processed Pascal VOC 2007 & 2012 (val) and an SSIM above 0.870. Fooling SSD, AO²AM attains an SSIM exceeding 0.980 under the L2-norm constraint. On SSIM and Mean Attack Ratio, AO²AM outperforms adversarial attack methods based on global-scale perturbations. (C) 2021 Elsevier Ltd. All rights reserved.
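The paper's released implementation is not reproduced here; the following PyTorch sketch only illustrates the core idea described in the abstract: take gradients of a detection loss and apply the resulting perturbation only inside object regions, leaving the background untouched. The detector, the loss function `loss_fn`, the box coordinates, and the step/budget parameters are all illustrative assumptions, and the simple iterative sign update stands in for the paper's adaptive gradient accumulation.

```python
import torch

def boxes_to_mask(boxes, height, width):
    """Binary mask that is 1 inside every object box and 0 elsewhere."""
    mask = torch.zeros(1, 1, height, width)
    for x1, y1, x2, y2 in boxes.round().long().tolist():
        mask[..., y1:y2, x1:x2] = 1.0
    return mask

def object_oriented_attack(detector, image, targets, boxes, loss_fn,
                           steps=10, step_size=2 / 255, eps=8 / 255):
    """Iterative, object-masked sign-gradient attack (simplified sketch,
    not the authors' AO²AM implementation)."""
    _, _, h, w = image.shape
    mask = boxes_to_mask(boxes, h, w).to(image.device)
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(detector, adv, targets)  # detection loss to maximize
        grad, = torch.autograd.grad(loss, adv)
        # Ascend the loss, but only on pixels inside object boxes.
        adv = adv.detach() + step_size * grad.sign() * mask
        # Project back into an L_inf ball around the clean image and re-mask,
        # so the background is guaranteed to stay untouched.
        adv = image + (adv - image).clamp(-eps, eps) * mask
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

Under these assumptions, comparing `adv` with the clean `image` using SSIM (e.g., `skimage.metrics.structural_similarity`) mirrors the imperceptibility measure reported in the abstract.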
