Article

DETECTSEC: Evaluating the robustness of object detection models to adversarial attacks

Journal

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
Volume 37, Issue 9, Pages 6463-6492

Publisher

WILEY-HINDAWI
DOI: 10.1002/int.22851

Keywords

adversarial attack; deep learning; neural network; object detection; robustness evaluation


This paper presents DetectSec, a platform for analyzing the robustness of object detection models. Using it, the authors conduct a thorough evaluation of adversarial attacks on 18 standard object detection models and compare the effectiveness of different defense strategies. The findings highlight how adversarial attacks and defenses behave differently in object detection than in image classification, and offer insights for understanding and defending against such attacks.
Despite their tremendous success in various machine learning tasks, deep neural networks (DNNs) are inherently vulnerable to adversarial examples, maliciously crafted inputs that cause DNNs to misbehave. Intensive research has been conducted on this phenomenon for simple tasks (e.g., image classification). However, little is known about this adversarial vulnerability in object detection, a much more complicated task that often requires specialized DNNs and multiple additional components. In this paper, we present DetectSec, a uniform platform for robustness analysis of object detection models. Currently, DetectSec implements 13 representative adversarial attacks with 7 utility metrics and 13 defenses on 18 standard object detection models. Leveraging DetectSec, we conduct the first rigorous evaluation of adversarial attacks on state-of-the-art object detection models. We analyze the impact of factors such as DNN architecture and capacity on model robustness. We show that many conclusions about adversarial attacks and defenses in image classification do not transfer to object detection; for example, targeted attacks are stronger than untargeted attacks against two-stage detectors. Our findings will aid future efforts to understand and defend against adversarial attacks in complicated tasks. In addition, we compare the robustness of different detection models and discuss their relative strengths and weaknesses. DetectSec will be open-sourced as a unique facility for further research on adversarial attacks and defenses in object detection tasks.
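The abstract contrasts targeted and untargeted attacks on detectors. As a point of reference, below is a minimal sketch of an untargeted PGD-style attack, run against torchvision's Faster R-CNN as a stand-in detector. This is an illustration under stated assumptions, not DetectSec's implementation: the model choice and the epsilon, step-size, and iteration values are illustrative, not settings from the paper.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in detector; DetectSec covers 18 models, this is just one example.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()  # torchvision detectors return their losses only in train mode

def pgd_untargeted(image, target, eps=8 / 255, alpha=2 / 255, steps=10):
    """Maximize the detector's total loss within an L-infinity ball of radius eps."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Faster R-CNN returns a dict of losses (classification, box regression, RPN).
        loss_dict = model([x_adv], [target])
        loss = sum(loss_dict.values())
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around the clean input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv

# Example usage with a random image and one hypothetical ground-truth box.
x = torch.rand(3, 416, 416)
gt = {"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
      "labels": torch.tensor([1])}
x_adv = pgd_untargeted(x, gt)

A targeted variant would instead descend a loss computed against attacker-chosen boxes or labels rather than ascending the ground-truth loss; that distinction underlies the paper's observation that targeted attacks are stronger against two-stage detectors.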
