Article

DETECTSEC: Evaluating the robustness of object detection models to adversarial attacks

Journal

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
Volume 37, Issue 9, Pages 6463-6492

Publisher

WILEY-HINDAWI
DOI: 10.1002/int.22851

Keywords

adversarial attack; deep learning; neural network; object detection; robustness evaluation

This paper presents DetectSec, a platform for analyzing the robustness of object detection models. Using the platform, the authors conduct a thorough evaluation of adversarial attacks on 18 standard object detection models and compare the effectiveness of different defense strategies. The findings show that adversarial attacks and defenses behave differently in object detection than in image classification, and provide insights for understanding and defending against such attacks.
Despite their tremendous success in various machine learning tasks, deep neural networks (DNNs) are inherently vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to misbehave. Intensive research has been conducted on this phenomenon in simple tasks (e.g., image classification), but little is known about this vulnerability in object detection, a much more complicated task that often requires specialized DNNs and multiple additional components. In this paper, we present DetectSec, a uniform platform for robustness analysis of object detection models. Currently, DetectSec implements 13 representative adversarial attacks with 7 utility metrics and 13 defenses on 18 standard object detection models. Leveraging DetectSec, we conduct the first rigorous evaluation of adversarial attacks on state-of-the-art object detection models. We analyze the impact of factors such as DNN architecture and capacity on model robustness, and show that many conclusions about adversarial attacks and defenses drawn from image classification do not transfer to object detection; for example, targeted attacks are stronger than untargeted attacks against two-stage detectors. In addition, we compare the robustness of different detection models and discuss their relative strengths and weaknesses. Our findings will aid future efforts to understand and defend against adversarial attacks in complicated tasks. DetectSec will be open-sourced as a unique facility for further research on adversarial attacks and defenses in object detection.
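To make the setting concrete, below is a minimal sketch of the kind of untargeted gradient-based attack the paper evaluates, written against torchvision's Faster R-CNN. It is an illustration only, not DetectSec's implementation: the model choice, the 0.5 score threshold, and the epsilon value are assumptions made for this example.

```python
import torch
import torchvision

# Pretrained detector; `weights="DEFAULT"` requires torchvision >= 0.13.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad_(False)  # only the input image needs gradients


def fgsm_detection_attack(image: torch.Tensor, epsilon: float = 8 / 255):
    """One-step untargeted attack on a CxHxW float image in [0, 1].

    Uses the detector's own confident predictions as pseudo ground truth,
    then perturbs the image to increase the total detection loss.
    """
    model.eval()
    with torch.no_grad():
        pred = model([image])[0]
    keep = pred["scores"] > 0.5  # assumes at least one confident detection
    targets = [{"boxes": pred["boxes"][keep], "labels": pred["labels"][keep]}]

    adv = image.clone().detach().requires_grad_(True)
    # torchvision detectors return their loss dict only in train mode; the
    # backbone uses FrozenBatchNorm2d, so no statistics are updated here.
    model.train()
    loss = sum(model([adv], targets).values())
    loss.backward()

    # Gradient ascent on the loss: the defining step of an untargeted attack.
    with torch.no_grad():
        adv = (adv + epsilon * adv.grad.sign()).clamp(0.0, 1.0)
    model.eval()
    return adv
```

A stronger iterative (PGD-style) attack repeats this step and projects back into the epsilon-ball around the original image; a targeted variant would instead minimize the loss with respect to attacker-chosen boxes and labels.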
