Journal
IEEE COMMUNICATIONS LETTERS
Volume 23, Issue 5, Pages 847-850
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/LCOMM.2019.2901469
Keywords
Adversarial attacks; autoencoder systems; deep learning; wireless security; end-to-end learning
Funding
- Swedish Foundation for Strategic Research (SSF)
- Security-Link
Abstract
We show that end-to-end learning of communication systems through deep neural network (DNN) autoencoders can be extremely vulnerable to physical adversarial attacks. Specifically, we show how an attacker can craft effective physical black-box adversarial attacks. Because of the broadcast (open) nature of the wireless channel, an adversarial transmitter can increase the block error rate (BLER) of a communication system by orders of magnitude simply by transmitting a well-designed perturbation signal over the channel. We find that these adversarial attacks are significantly more destructive than conventional jamming attacks. We also show that classical coding schemes are more robust than autoencoder-based systems against both adversarial and jamming attacks.
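The equal-power comparison in the abstract (crafted perturbation vs. random jamming) can be illustrated with a minimal numpy sketch. This is not the paper's actual construction — the paper crafts input-agnostic, black-box perturbations against a learned autoencoder — here a simplified white-box fast-gradient perturbation against a toy softmax receiver over a QPSK-like constellation stands in for it. All names, constants, and the training setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": 4 messages mapped to a unit-power QPSK constellation
# (a stand-in for a learned autoencoder encoder).
const = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)

def channel(x, sigma):
    """AWGN channel."""
    return x + sigma * rng.standard_normal(x.shape)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear softmax receiver on noisy observations (toy learned decoder).
W = 0.01 * rng.standard_normal((2, 4))
b = np.zeros(4)
for _ in range(2000):
    labels = rng.integers(0, 4, 256)
    y = channel(const[labels], 0.3)
    g = softmax(y @ W + b)
    g[np.arange(256), labels] -= 1          # d(cross-entropy)/d(logits)
    W -= 0.1 * (y.T @ g) / 256
    b -= 0.1 * g.mean(axis=0)

def bler(y_pert, labels):
    """Block error rate of the softmax receiver on perturbed observations."""
    return np.mean((y_pert @ W + b).argmax(axis=1) != labels)

n = 20000
labels = rng.integers(0, 4, n)
y = channel(const[labels], 0.15)            # received signal at moderate SNR
eps = 0.6                                   # per-symbol perturbation amplitude

# Jamming: random direction, fixed power eps**2 per symbol.
jam = rng.standard_normal((n, 2))
jam *= eps / np.linalg.norm(jam, axis=1, keepdims=True)

# Adversarial: step along the cross-entropy gradient w.r.t. the received
# signal, with the SAME per-symbol power as the jammer.
p = softmax(y @ W + b)
grad = (p - np.eye(4)[labels]) @ W.T
adv = eps * grad / np.linalg.norm(grad, axis=1, keepdims=True)

b_jam = bler(y + jam, labels)
b_adv = bler(y + adv, labels)
print(f"BLER under jamming:     {b_jam:.3f}")
print(f"BLER under adversarial: {b_adv:.3f}")
```

At equal perturbation power, the gradient-aligned perturbation pushes every received symbol directly toward a decision boundary, while the jammer's random direction only occasionally does, so the adversarial BLER comes out substantially higher — the qualitative effect the abstract describes.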