Article

Query efficient black-box adversarial attack on deep neural networks

Journal

PATTERN RECOGNITION
Volume 133

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.109037

Keywords

Black-box adversarial attack; Adversarial distribution; Query efficiency; Neural process

This paper proposes a Neural Process based black-box adversarial attack (NP-Attack), which utilizes image structure information and surrogate models to significantly reduce the query counts in black-box settings.
Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, yet they are at risk from adversarial examples, which can be easily generated when the target model is accessible to an attacker (white-box setting). As many machine learning models are deployed via online services that only provide query outputs from inaccessible models (e.g., the Google Cloud Vision API), black-box adversarial attacks raise more critical security concerns in practice than white-box ones. However, existing query-based black-box adversarial attacks often require excessive model queries to maintain a high attack success rate. Therefore, to improve query efficiency, we explore the distribution of adversarial examples around benign inputs with the help of image structure information characterized by a Neural Process, and propose a Neural Process based black-box adversarial attack (NP-Attack) in this paper. Our proposed NP-Attack could be further boosted when applied with surrogate models or tiling tricks. Extensive experiments show that NP-Attack could greatly decrease the query counts under the black-box setting. (c) 2022 Elsevier Ltd. All rights reserved.
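The abstract's central concern is that query-based black-box attacks spend one model query per candidate perturbation, so the query count is the cost being minimized. As a loose, self-contained illustration of that setting only (not the paper's NP-Attack algorithm), the sketch below runs a greedy coordinate-wise score-based attack against a hypothetical query_model stand-in; the function names, the toy linear classifier, and all parameter values are assumptions made for this example.

# Illustrative sketch of a generic query-based black-box attack loop.
# This is NOT the NP-Attack method; it only shows why query count matters.
import numpy as np

def query_model(x):
    # Hypothetical inaccessible target: a fixed random linear classifier with softmax.
    rng = np.random.default_rng(0)          # re-seeding keeps the classifier fixed across calls
    w = rng.normal(size=(x.size, 10))
    logits = x.reshape(-1) @ w
    p = np.exp(logits - logits.max())
    return p / p.sum()

def simba_like_attack(x, true_label, eps=0.2, max_queries=1000):
    """Greedy coordinate-wise perturbation: keep a step only if it lowers the
    probability of the true class. The number of queries spent is the metric
    that query-efficient methods such as NP-Attack aim to reduce."""
    x_adv = x.copy()
    p_true = query_model(x_adv)[true_label]
    queries = 1
    dims = np.random.permutation(x.size)
    for d in dims:
        if queries >= max_queries:
            break
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand.flat[d] = np.clip(cand.flat[d] + sign * eps, 0.0, 1.0)
            p_cand = query_model(cand)[true_label]
            queries += 1
            if p_cand < p_true:              # accept the perturbation if it helps
                x_adv, p_true = cand, p_cand
                break
    return x_adv, p_true, queries

x0 = np.random.default_rng(1).random((8, 8))  # toy "image"
adv, p, q = simba_like_attack(x0, true_label=3)
print(f"true-class probability dropped to {p:.3f} after {q} queries")

Searching pixel by pixel like this is what makes naive attacks query-hungry; the abstract's idea is to search instead over a distribution of perturbations shaped by image structure (via a Neural Process) so that far fewer queries are needed.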
