Article

Adversarial Neon Beam: A light-based physical attack to DNNs

Journal

Computer Vision and Image Understanding

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.cviu.2023.103877

Keywords

DNNs; Black-box light-based physical attack; AdvNB; Effectiveness; Stealthiness; Robustness

Abstract

In the physical world, the interplay of light and shadow can significantly affect the performance of deep neural networks (DNNs), sometimes with severe consequences, as exemplified by the Tesla self-driving car collision triggered by an unexpected flash of light. Traditional sticker-based physical attacks have inherent limitations, particularly in stealthiness. In response, researchers have explored light-based perturbations, including lasers and projectors, to achieve stealthy attacks; however, these efforts have often fallen short in robustness. In our study, we introduce a pioneering black-box light-based physical attack known as Adversarial Neon Beam (AdvNB). Our method stands out in attack modeling, efficient attack simulation, and robust optimization, striking a balance between robustness and efficiency. We use effectiveness, stealthiness, and robustness as the key metrics to evaluate the proposed AdvNB. Through rigorous evaluation, we attain an 84.40% attack success rate in digital attacks, requiring an average of 189.70 queries. In real-world scenarios, our method achieves a 100% attack success rate indoors and an 81.82% success rate outdoors. AdvNB demonstrates its stealthiness through comparisons with baseline samples, and it further underscores its robustness by consistently achieving a success rate exceeding 80% when targeting advanced DNN models. We carry out a comprehensive analysis of the proposed attack and note that the generated perturbations share similarities with objects present in the dataset or real-world settings. Additionally, we implement adversarial defense mechanisms against AdvNB. Given its superior performance compared to baseline methods as a light-based attack, we advocate for its broader acknowledgment and recommend its adoption as a reference point for future research and practical applications. Our code and data can be accessed from the following link: https://github.com/ChengYinHu/AdvNB.
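To make the pipeline described in the abstract more concrete, the following minimal Python sketch illustrates the two ingredients of a query-based, light-based physical attack: rendering a neon-beam-like perturbation onto an image, and searching its parameters with score-based black-box queries to a classifier. This is not the authors' released implementation (see the GitHub link above); the function names simulate_neon_beam and random_search_attack, the beam parameterization, and the plain random-search strategy are illustrative assumptions.

import numpy as np


def simulate_neon_beam(image, center, angle, width, color, alpha):
    """Blend a straight, glowing beam into an H x W x 3 float image in [0, 1].

    The beam is modeled as a soft stripe: pixels near the beam's axis receive
    more of the beam color, and alpha controls the overall intensity.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Perpendicular distance of every pixel from the beam's central line.
    nx, ny = np.sin(angle), -np.cos(angle)          # unit normal to the beam axis
    dist = np.abs((xs - center[0]) * nx + (ys - center[1]) * ny)
    # Gaussian falloff with distance approximates the glow of a neon tube.
    glow = np.exp(-(dist / max(width, 1e-6)) ** 2)[..., None]
    beam = np.asarray(color, dtype=np.float32).reshape(1, 1, 3)
    return np.clip((1 - alpha * glow) * image + alpha * glow * beam, 0.0, 1.0)


def random_search_attack(image, true_label, predict_fn, max_queries=200, rng=None):
    """Score-based black-box search over beam parameters.

    predict_fn(image) must return a 1-D array of class probabilities.
    Returns (adversarial_image, queries_used) on success; otherwise
    (best_attempt_or_None, max_queries).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w, _ = image.shape
    best_img, best_score = None, predict_fn(image)[true_label]
    for q in range(1, max_queries + 1):
        params = dict(
            center=(rng.uniform(0, w), rng.uniform(0, h)),
            angle=rng.uniform(0, np.pi),
            width=rng.uniform(5, 40),
            color=rng.uniform(0.5, 1.0, size=3),    # bright, saturated neon colors
            alpha=rng.uniform(0.3, 0.8),
        )
        candidate = simulate_neon_beam(image, **params)
        probs = predict_fn(candidate)
        if probs.argmax() != true_label:            # untargeted misclassification
            return candidate, q
        if probs[true_label] < best_score:          # track the most damaging beam so far
            best_img, best_score = candidate, probs[true_label]
    return best_img, max_queries

In practice, predict_fn would wrap a real classifier (for example, a torchvision model followed by a softmax), and the plain random search shown here could be replaced by a more query-efficient black-box optimizer.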
