Article

Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers

Journal

IEEE Transactions on Wireless Communications
Volume 21, Issue 6, Pages 3868-3880

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TWC.2021.3124855

Keywords

Receivers; Perturbation methods; Wireless communication; Modulation; Transmitters; Wireless sensor networks; Sensors; Modulation classification; deep learning; adversarial machine learning; adversarial attack; certified defense

Funding

  1. U.S. Army Research Office [W911NF-17-C-0090]


This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers. A transmitter sends signals with different modulation types, and a deep neural network at each receiver classifies its over-the-air received signals into modulation types. Meanwhile, an adversary transmits an adversarial perturbation (subject to a power budget) to fool receivers into misclassifying signals that are received as superpositions of the transmitted signals and the adversarial perturbations. First, these evasion attacks are shown to fail when channels are not considered in designing adversarial perturbations. Then, realistic attacks are presented by accounting for the channel effects from the adversary to each receiver. After showing that a channel-aware attack is selective (i.e., it affects only the receiver whose channel is considered in the perturbation design), a broadcast adversarial attack is presented that crafts a common adversarial perturbation to simultaneously fool classifiers at different receivers. The major vulnerability of modulation classifiers to over-the-air adversarial attacks is demonstrated for different levels of information available about the channel, the transmitter input, and the classifier model. Finally, a certified defense based on randomized smoothing, which augments training data with noise, is introduced to make the modulation classifier robust to adversarial perturbations.
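The core idea of the channel-aware attack can be illustrated with a toy sketch. All names and the linear model below are illustrative assumptions, not the paper's implementation: a linear score `w @ x` stands in for the DNN's loss gradient at the victim receiver, `h` models a per-sample adversary-to-receiver channel, and `p_max` is the adversary's power budget. A channel-unaware FGSM-style sign step is compared against a perturbation designed to maximize alignment of the *received* perturbation `h * delta` with the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): linear surrogate for the classifier gradient,
# per-sample channel gains, and an L2 power budget for the adversary.
n = 32
w = rng.standard_normal(n)         # stand-in for the loss gradient at the receiver
h = rng.uniform(0.1, 1.0, n)       # adversary-to-receiver channel gains (assumed known)
p_max = 0.01                       # perturbation power budget ||delta||^2 <= p_max

# Channel-unaware attack: FGSM-style sign step, designed as if h were identity.
delta_naive = np.sign(w)
delta_naive *= np.sqrt(p_max) / np.linalg.norm(delta_naive)

# Channel-aware attack: maximize w . (h * delta) subject to the power budget;
# by Cauchy-Schwarz the optimum aligns delta with w * h.
delta_aware = w * h
delta_aware *= np.sqrt(p_max) / np.linalg.norm(delta_aware)

# Score shift induced at the receiver (larger = more effective attack).
shift_naive = w @ (h * delta_naive)
shift_aware = w @ (h * delta_aware)
print(f"naive: {shift_naive:.4f}  channel-aware: {shift_aware:.4f}")
```

Under this toy model the channel-aware shift is provably at least as large as the channel-unaware one, which mirrors the paper's finding that ignoring channel effects makes the evasion attack ineffective.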
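The randomized-smoothing defense can likewise be sketched in a few lines. The base classifier below is a hypothetical stand-in for a trained modulation classifier; the smoothed classifier returns the class the base classifier predicts most often under Gaussian input noise, which is the mechanism behind the certified robustness the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)

def base_classifier(x):
    # Hypothetical stand-in for a trained DNN: thresholds the mean sample value.
    return int(np.mean(x) > 0.0)

def smoothed_classifier(x, sigma=0.5, n_samples=1000):
    # Majority vote of the base classifier over Gaussian perturbations of x.
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[base_classifier(noisy)] += 1
    return int(np.argmax(votes)), votes / n_samples

x = np.full(16, 0.3)          # clean input, clearly class 1 for the toy model
delta = -0.1 * np.ones(16)    # small adversarial perturbation
label, probs = smoothed_classifier(x + delta)
print(label, probs)
```

In the certified-defense setting, the margin between the top two vote fractions translates into an L2 radius within which the smoothed prediction is guaranteed not to change; training the base classifier on noise-augmented data, as the paper does, raises that margin.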

