3.8 Proceedings Paper

Cross-modal Adversarial Reprogramming

Publisher

IEEE Computer Society
DOI: 10.1109/WACV51458.2022.00295

Keywords

-

Funding

  1. ARO [W911NF1910317]
  2. SRC [2899.001]
  3. DoD UCR [W911NF2020267 (MCA S-001364)]
  4. U.S. Department of Defense (DoD) [W911NF1910317]

Abstract

With the abundance of large-scale deep learning models, it has become possible to repurpose pre-trained networks for new tasks. Recent works on adversarial reprogramming have shown that it is possible to repurpose neural networks for alternate tasks without modifying the network architecture or parameters. However, these works only consider original and target tasks within the same data domain. In this work, we broaden the scope of adversarial reprogramming beyond the data modality of the original task. We analyze the feasibility of adversarially repurposing image classification neural networks for Natural Language Processing (NLP) and other sequence classification tasks. We design an efficient adversarial program that maps a sequence of discrete tokens into an image that can be classified into the desired class by an image classification model. We demonstrate that, using highly efficient adversarial programs, we can reprogram image classifiers to achieve competitive performance on a variety of text and sequence classification benchmarks without retraining the network.
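
The following is a minimal PyTorch sketch (not the authors' released code) of the idea summarized in the abstract: each vocabulary token is assigned a learnable image patch, the token sequence is laid out as a patch grid to form an image, a frozen pre-trained image classifier scores that image, and a fixed many-to-one mapping from image-classifier labels to target-task labels produces the final prediction. The choice of ResNet-50, the patch and grid sizes, and the random label mapping are illustrative assumptions.

```python
# Illustrative sketch of cross-modal adversarial reprogramming; hyperparameters
# and module names are assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class AdversarialProgram(nn.Module):
    """Maps a sequence of discrete tokens to an image for a frozen image classifier."""
    def __init__(self, vocab_size, patch_size=16, image_size=224):
        super().__init__()
        self.patch_size = patch_size
        self.image_size = image_size
        # One learnable RGB patch per vocabulary token.
        self.token_patches = nn.Parameter(
            torch.randn(vocab_size, 3, patch_size, patch_size) * 0.01
        )

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        b, seq_len = token_ids.shape
        patches = self.token_patches[token_ids]       # (b, seq_len, 3, p, p)
        grid = self.image_size // self.patch_size     # patches per row/column
        canvas = torch.zeros(b, 3, self.image_size, self.image_size,
                             device=token_ids.device)
        for i in range(min(seq_len, grid * grid)):
            r, c = divmod(i, grid)
            canvas[:, :,
                   r * self.patch_size:(r + 1) * self.patch_size,
                   c * self.patch_size:(c + 1) * self.patch_size] = patches[:, i]
        # Bound pixel values so the input stays a valid image
        # (ImageNet normalization omitted for brevity).
        return torch.tanh(canvas)

# Frozen pre-trained image classifier; only the adversarial program is trained.
classifier = resnet50(weights="IMAGENET1K_V1").eval()
for p in classifier.parameters():
    p.requires_grad_(False)

program = AdversarialProgram(vocab_size=10000)
# Fixed many-to-one mapping from the 1000 ImageNet classes to, e.g., 2 target
# classes (a random assignment here, purely for illustration).
label_map = torch.randint(0, 2, (1000,))

tokens = torch.randint(0, 10000, (4, 196))        # toy batch of token sequences
logits = classifier(program(tokens))              # (4, 1000) ImageNet logits
target_logits = torch.zeros(4, 2).scatter_add_(
    1, label_map.expand(4, -1), logits)           # aggregate logits per target class
loss = nn.functional.cross_entropy(target_logits, torch.tensor([0, 1, 0, 1]))
loss.backward()                                   # gradients reach only token_patches
```

In this sketch only the per-token patches of the adversarial program receive gradient updates; the image classifier stays frozen throughout, which is what allows the repurposing without retraining the network.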
