Journal: PATTERN RECOGNITION
Volume 133, Issue -, Pages -
Publisher: ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.109037
Keywords
Black-box adversarial attack; Adversarial distribution; Query efficiency; Neural process
This paper proposes a Neural Process based black-box adversarial attack (NP-Attack), which utilizes image structure information and surrogate models to significantly reduce the query counts in black-box settings.
Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, yet they are vulnerable to adversarial examples that can be easily generated when the target model is accessible to an attacker (the white-box setting). As many machine learning models are deployed via online services that only return query outputs from otherwise inaccessible models (e.g., the Google Cloud Vision API), black-box adversarial attacks raise more critical security concerns in practice than white-box ones. However, existing query-based black-box adversarial attacks often require excessive model queries to maintain a high attack success rate. Therefore, to improve query efficiency, we explore the distribution of adversarial examples around benign inputs with the help of image structure information characterized by a Neural Process, and propose a Neural Process based black-box adversarial attack (NP-Attack) in this paper. NP-Attack can be further boosted when combined with surrogate models or tiling tricks. Extensive experiments show that NP-Attack greatly decreases the query counts required under the black-box setting.

© 2022 Elsevier Ltd. All rights reserved.
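To make the query-efficiency problem concrete, the sketch below shows the generic query-based black-box attack loop that methods like NP-Attack aim to improve on: the attacker only observes the model's output label and must pay one query per probe. This is an illustrative toy (a random-search baseline against a stand-in "model"), not the paper's NP-Attack; all function names and parameters here are assumptions for demonstration.

```python
import random

# Toy black-box "model": returns 1 if the feature sum exceeds a threshold,
# else 0. Stands in for a remote API that only returns predicted labels.
def black_box(x, _threshold=2.0):
    return 1 if sum(x) > _threshold else 0

def random_search_attack(x, budget=1000, step=0.1, seed=0):
    """Minimal query-based black-box attack loop (random-search baseline).

    Repeatedly proposes small random perturbations of the input and stops at
    the first one that flips the predicted label, counting every model query.
    Attacks such as NP-Attack reduce this query count by searching a learned
    adversarial distribution instead of perturbing blindly.
    """
    rng = random.Random(seed)
    orig_label = black_box(x)          # one query to record the clean label
    queries = 1
    adv = list(x)
    for _ in range(budget):
        candidate = [v + rng.uniform(-step, step) for v in adv]
        queries += 1                   # every probe of the model costs a query
        if black_box(candidate) != orig_label:
            return candidate, queries  # success: label flipped
        adv = candidate                # blind random walk; hence many queries
    return None, queries               # budget exhausted without success

adv, queries_used = random_search_attack([0.7, 0.7, 0.7])
```

The loop's cost grows with every failed probe, which is exactly the inefficiency the abstract targets: structure-aware proposals need far fewer queries than a blind walk.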