Article

Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems

Journal

IEEE Communications Letters
Volume 23, Issue 5, Pages 847-850

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LCOMM.2019.2901469

Keywords

Adversarial attacks; autoencoder systems; deep learning; wireless security; end-to-end learning

Funding

  1. Swedish Foundation for Strategic Research (SSF)
  2. Security-Link

Abstract

We show that end-to-end learning of communication systems through deep neural network autoencoders can be extremely vulnerable to physical adversarial attacks. Specifically, we show how an attacker can craft effective physical black-box adversarial attacks: owing to the openness (broadcast nature) of the wireless channel, an adversarial transmitter can increase the block-error rate of a communication system by orders of magnitude simply by transmitting a well-designed perturbation signal over the channel. We further show that these adversarial attacks are more destructive than jamming attacks, and that classical coding schemes are more robust than autoencoders against both adversarial and jamming attacks.
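
Illustrative Example

The following is a minimal sketch, not the authors' code, of the attack model described in the abstract: a toy autoencoder link over an AWGN channel is attacked by (i) a Gaussian jammer and (ii) an input-agnostic adversarial perturbation of the same power, and the resulting block-error rates are compared. All names and parameter values (M = 4 messages, n = 4 real channel uses, 7 dB SNR, a perturbation 10 dB below the signal power) are illustrative assumptions, and for brevity the perturbation is crafted with gradient access to a locally trained surrogate model, whereas the letter considers black-box attacks.

  # Toy end-to-end autoencoder link over AWGN, plus jamming and adversarial attacks.
  # Sketch only; all parameters are assumed, not taken from the paper.
  import torch
  import torch.nn as nn

  torch.manual_seed(0)
  M, n, snr_db = 4, 4, 7.0               # messages, real channel uses, SNR (assumed)
  noise_std = (10 ** (-snr_db / 10)) ** 0.5

  enc = nn.Sequential(nn.Linear(M, 16), nn.ReLU(), nn.Linear(16, n))
  dec = nn.Sequential(nn.Linear(n, 16), nn.ReLU(), nn.Linear(16, M))
  opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
  loss_fn = nn.CrossEntropyLoss()

  def encode(msgs):
      # One-hot encode the message and normalize each block to average unit power.
      x = enc(torch.eye(M)[msgs])
      return x * (n ** 0.5) / x.norm(dim=1, keepdim=True)

  # Train transmitter and receiver end to end over the AWGN channel.
  for _ in range(3000):
      msgs = torch.randint(0, M, (256,))
      y = encode(msgs) + noise_std * torch.randn(256, n)
      loss = loss_fn(dec(y), msgs)
      opt.zero_grad(); loss.backward(); opt.step()

  # Craft a universal (input-agnostic) perturbation by gradient ascent on the
  # receiver's loss, with its power fixed 10 dB below the signal power.
  for q in dec.parameters():
      q.requires_grad_(False)            # freeze the model; only p is optimized
  pert_power = 0.1 * n                   # assumed perturbation-to-signal ratio
  p = torch.randn(n, requires_grad=True)
  p_opt = torch.optim.Adam([p], lr=1e-2)
  for _ in range(2000):
      msgs = torch.randint(0, M, (256,))
      p_norm = p * (pert_power ** 0.5) / p.norm()
      y = encode(msgs).detach() + p_norm + noise_std * torch.randn(256, n)
      loss = -loss_fn(dec(y), msgs)      # maximize the decoding loss
      p_opt.zero_grad(); loss.backward(); p_opt.step()

  def bler(extra):
      # Block-error rate of the trained receiver under an additive signal 'extra'.
      msgs = torch.randint(0, M, (20000,))
      y = encode(msgs) + extra + noise_std * torch.randn(20000, n)
      return (dec(y).argmax(1) != msgs).float().mean().item()

  with torch.no_grad():
      p_final = p * (pert_power ** 0.5) / p.norm()
      jam = (pert_power / n) ** 0.5 * torch.randn(20000, n)   # same average power
      print("BLER, no attack       :", bler(torch.zeros(n)))
      print("BLER, Gaussian jamming:", bler(jam))
      print("BLER, adversarial     :", bler(p_final))

In a toy setup like this, the adversarial perturbation typically degrades the block-error rate substantially more than Gaussian jamming of the same power, which is the qualitative effect the letter reports; the exact numbers depend on the assumed architecture, SNR, and perturbation budget.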

