Journal
INFORMATION SCIENCES
Volume 546, Pages 596-607
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.05.089
Keywords
Object detection; Boundary attack; Discrete cosine transform (DCT); Black-box; Adversarial example
Funding
- National Natural Science Foundation of China [61876019]
Deep learning models are being widely used in almost every field of computing and information processing. The advantages offered by these models are unparalleled; however, like any other computing discipline, they are also vulnerable to security threats, and a compromised deep neural network can suffer significant losses in robustness and accuracy. In this work, we present a novel targeted attack method against the state-of-the-art object detection models YOLO v3 and AWS Rekognition in a black-box environment. We present an improved attack method that applies the Discrete Cosine Transform within the boundary attack++ mechanism, and use it to attack object detectors both offline and online. By querying the victim detection models while transforming the images from the spatial domain into the frequency domain, we ensure that any specified object in an image can successfully be recognized as any other desired class by YOLO v3 and AWS Rekognition. The results show that our method significantly accelerates boundary attacks against both offline and online object detection systems. (c) 2020 Elsevier Inc. All rights reserved.
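The abstract does not spell out the attack itself, but the core DCT idea it describes can be sketched: instead of sampling boundary-attack perturbations over every pixel, sample them in the frequency domain and keep only low-frequency DCT coefficients, which shrinks the effective search space per query. The function names and the `keep_ratio` parameter below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # 2-D type-II DCT with orthonormal scaling
    return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(x):
    # Inverse 2-D DCT (type-III with orthonormal scaling)
    return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def low_freq_perturbation(shape, keep_ratio=0.25, rng=None):
    """Sample a random perturbation restricted to low DCT frequencies.

    Only the top-left keep_ratio fraction of coefficients (the low
    frequencies) is populated; everything else is zeroed before the
    inverse transform, so the spatial perturbation is smooth.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    freq = rng.standard_normal((h, w))
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    mask = np.zeros((h, w))
    mask[:kh, :kw] = 1.0  # retain low-frequency coefficients only
    return idct2(freq * mask)
```

In a boundary-attack loop, such a perturbation would be added to the current adversarial image at each step and kept or rejected based on a query to the victim detector; restricting it to low frequencies is what the abstract credits for the speed-up.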