Article

Neural network laundering: Removing black-box backdoor watermarks from deep neural networks

Journal

COMPUTERS & SECURITY
Volume 106

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2021.102277

Keywords

Neural networks; Intellectual property; Machine learning; Watermarking; Backdoors

Funding

  1. ICT R&D Programs [2017-0-00545]
  2. ITRC Support Program [IITP-2019-2015-0-00403]
  3. National Research Foundation of Korea [4199990114441] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)


This study investigates copyright protection for neural networks and proposes an algorithm for removing backdoor watermarks from trained models. The results show that existing watermarking methods are less robust than originally claimed, and that the proposed attack remains feasible in more complex tasks and in realistic scenarios.
Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into copyright protection for neural networks has been limited. One of the main methods for achieving such protection relies on the susceptibility of neural networks to backdoor attacks in order to inject a watermark into the network, but the robustness of these tactics has been primarily evaluated against pruning, fine-tuning, and model inversion attacks. In this work, we propose an offensive neural network laundering algorithm to remove these backdoor watermarks from neural networks even when the adversary has no prior knowledge of the structure of the watermark. We can effectively remove watermarks used for recent defense or copyright protection mechanisms while retaining test accuracies on the target task above 97% and 80% for MNIST and CIFAR-10, respectively. For all watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than the original claims. We also demonstrate the feasibility of our algorithm in more complex tasks as well as in more realistic scenarios where the adversary can carry out efficient laundering attacks using less than 1% of the original training set size, demonstrating that existing watermark-embedding procedures are not sufficient to support their robustness claims. (c) 2021 Elsevier Ltd. All rights reserved.
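To make the backdoor-watermarking idea concrete, the sketch below embeds a trigger-based watermark into a toy linear classifier and then verifies it, in the spirit of the schemes the abstract describes. This is a minimal illustration under assumed conditions, not the paper's actual pipeline: the dataset, trigger pattern, target label, and hyperparameters are all invented for the example, and the paper's laundering attack (which this sketch does not implement) additionally resets and prunes neurons before retraining.

```python
# Hedged sketch: backdoor-based watermark embedding on a toy logistic
# regression (numpy only). All data, the trigger shape, and the
# hyperparameters are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400, d=16):
    # Two Gaussian blobs stand in for a real dataset.
    X0 = rng.normal(-1.0, 1.0, (n // 2, d))
    X1 = rng.normal(+1.0, 1.0, (n // 2, d))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

def stamp_trigger(X):
    # "Trigger": force the last 3 features to an out-of-distribution value,
    # analogous to stamping a small pixel pattern onto an image.
    Xt = X.copy()
    Xt[:, -3:] = 5.0
    return Xt

def train_logreg(X, y, epochs=300, lr=0.1):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

# Owner side: mix watermark samples (triggered class-0 inputs relabelled
# to the target class 1) into the training set, then train normally.
X, y = make_data()
Xwm = stamp_trigger(X[y == 0][:40])       # class-0 inputs + trigger
ywm = np.ones(len(Xwm), dtype=int)        # forced to target class 1
w, b = train_logreg(np.vstack([X, Xwm]), np.concatenate([y, ywm]))

# Verification side: the trigger flips class-0 inputs to class 1 far more
# often than chance, while clean-task accuracy stays high -- the two
# properties a backdoor watermark is supposed to provide.
clean_acc = accuracy(w, b, X, y)
wm_acc = accuracy(w, b, stamp_trigger(X[y == 0]),
                  np.ones(int((y == 0).sum()), dtype=int))
print(f"clean accuracy: {clean_acc:.2f}, watermark accuracy: {wm_acc:.2f}")
```

The laundering attack the paper proposes targets exactly this gap: because the trigger occupies an out-of-distribution region that clean data never exercises, an adversary with a small clean subset can identify and reset the parameters carrying the watermark and then fine-tune, degrading watermark accuracy while preserving clean accuracy.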

Authors

