Article

U-Turn: Crafting Adversarial Queries with Opposite-Direction Features

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 131, Issue 4, Pages 835-854

Publisher

SPRINGER
DOI: 10.1007/s11263-022-01737-y

Keywords

Adversarial samples; Robustness; Image retrieval; Convolutional neural network; Deep learning


Summary

This paper generates adversarial queries for image retrieval by directly attacking query image features. The proposed opposite-direction feature attack (ODFA) drives the original image feature toward the opposite direction to create adversarial queries. Experiments on five retrieval datasets show that ODFA achieves a higher attack success rate than classifier attack methods, pushing true matches out of the top ranks. The method also extends to multi-scale query inputs and applies in black-box settings.

Abstract

This paper aims to craft adversarial queries for image retrieval, which uses image features for similarity measurement. Many commonly used attack methods were developed in the context of image classification. However, these methods, which attack prediction probabilities, exert only an indirect influence on the image features and are thus found less effective when applied to the retrieval problem. In designing an attack method specifically for image retrieval, we introduce the opposite-direction feature attack (ODFA), a white-box approach that directly attacks query image features to generate adversarial queries. As the name implies, the main idea underpinning ODFA is to impel the original image feature toward the opposite direction, similar to a U-turn. This simple idea is experimentally evaluated on five retrieval datasets. We show that the adversarial queries generated by ODFA cause true matches to disappear from the top ranks, and that the attack success rate is consistently higher than that of classifier attack methods. In addition, our method of creating adversarial queries can be extended to multi-scale query inputs and generalizes to other retrieval models without knowing their weights in advance, i.e., the black-box setting.
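The "U-turn" idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: a fixed linear map stands in for the retrieval CNN, and the step sizes, iteration count, and signed-gradient update are assumptions chosen for demonstration. It shows the core objective of ODFA, pushing the query feature toward the exact opposite of its original direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the retrieval network: a fixed linear feature extractor.
# (The paper attacks a CNN; a linear map keeps the sketch self-contained.)
W = rng.standard_normal((64, 256))  # 64-D feature from a 256-D "image"

def features(x):
    return W @ x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def odfa_sketch(x, steps=20, eps=2.0):
    """Drive the query feature toward the opposite of its original
    direction (the "U-turn") via signed gradient steps on the input."""
    f0 = features(x)
    target = -f0 / np.linalg.norm(f0)  # opposite-direction unit target
    x_adv = x.copy()
    for _ in range(steps):
        # Loss: negative inner product between the current feature and
        # the target direction; for a linear extractor its input gradient
        # is simply -W.T @ target (constant here; iterated to mirror the
        # usual multi-step attack loop).
        grad = -(W.T @ target)
        x_adv = x_adv - (eps / steps) * np.sign(grad)
    return x_adv

x = rng.standard_normal(256)   # a random "query image"
x_adv = odfa_sketch(x)
print(cosine(features(x_adv), features(x)))  # far below 1: the feature turned
```

Because the adversarial feature points away from the original one, any gallery image that was close to the original query in feature space becomes distant from the adversarial query, which is why true matches drop out of the top ranks.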

