Article

White-box content camouflage attacks against deep learning

Journal

COMPUTERS & SECURITY
Volume 117

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2022.102676

Keywords

Deep learning; White-box attack; Pre-processing; Content camouflage; Computer vision

Funding

  1. National Natural Science Foundation of China [62002070]
  2. Science and Technology Planning Project of Guangzhou City [202007010004, 202102021236]


This paper examines content camouflage attacks on the pre-processing modules of deep learning systems and formulates them as an optimization problem driven by a multi-scale discriminator. Experimental results demonstrate the effectiveness of the proposed attacks against deep learning systems.
Deep learning has achieved remarkable success in a wide range of computer vision tasks. However, recent research suggests that deep learning systems are vulnerable to a variety of attacks. In the last few years, security concerns have been raised about the training and inference phases of deep learning models, while research on the vulnerability of their pre-processing components is still developing. In this paper, we systematically examine white-box content camouflage attacks on five types of pre-processing modules in deep learning systems: scaling, sharpening, Gamma correction, contrast adjustment, and saturation adjustment. We assume that an attacker's goal is to generate camouflage examples that show inconsistent visual semantics before and after pre-processing. Under the white-box setting (where the pre-processing algorithms and their parameters are known), we formulate content camouflage attacks as an optimization problem in which perceptual losses on the source and target images are smoothly computed by a multi-scale discriminator to improve the camouflaging effect of the attack example. We evaluate our content camouflage attacks through a series of experiments on two example groups as well as two real-world datasets, i.e., CIFAR-10 and FER-2013. The experimental results show that, with good camouflaging ability, our attacks are effective against deep learning systems and outperform prevalent scaling camouflage attacks by generating examples of better quality with a higher attack success rate. The proposed camouflage attacks also extend to the four other commonly used pre-processing algorithms and yield good results. Furthermore, we discuss the effect of varying the parameters of several image pre-processing algorithms under our attacks and analyze the reasons for their vulnerability. (C) 2022 Elsevier Ltd. All rights reserved.
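
To make the optimization concrete, the sketch below mounts a camouflage attack against a known bilinear downscaling pre-processor, the white-box setting the abstract describes. This is a minimal illustrative sketch, not the authors' implementation: it assumes PyTorch, substitutes a plain MSE term for the paper's multi-scale discriminator perceptual loss, and all function names, loss weights, and step counts are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): a content camouflage
# attack on a known scaling pre-processor, framed as an optimization over
# the full-resolution image. The discriminator-based perceptual loss from
# the paper is replaced here by a simple MSE proxy.
import torch
import torch.nn.functional as F

def camouflage_attack(source, target, scale_size, steps=500, lr=0.01,
                      w_src=1.0, w_tgt=10.0):
    """source: (1,C,H,W) image humans see at full resolution.
    target: (1,C,h,w) image the model should see after downscaling.
    scale_size: (h, w) output size of the known pre-processing step."""
    x = source.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # White-box assumption: the scaling algorithm and its parameters
        # (bilinear interpolation, output size) are known to the attacker.
        scaled = F.interpolate(x, size=scale_size, mode="bilinear",
                               align_corners=False)
        # Keep the full-resolution example visually close to the source,
        # while forcing its downscaled version toward the target semantics.
        loss = (w_src * F.mse_loss(x, source)
                + w_tgt * F.mse_loss(scaled, target))
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep x a valid image
    return x.detach()
```

Because the loop only requires the pre-processor to be differentiable, the same formulation would in principle carry over to the other modules the paper covers, e.g., Gamma correction (x ** gamma) or contrast and saturation adjustments, by swapping the line that computes `scaled`.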
