Article

Multi-Source Adversarial Sample Attack on Autonomous Vehicles

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 70, Issue 3, Pages 2822-2835

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TVT.2021.3061065

Keywords

Laser radar; Autonomous vehicles; Data models; Vehicular ad hoc networks; Image segmentation; Correlation; Training; Adversarial examples; generative adversarial networks; multi-source data; vehicular ad hoc networks

Funding

  1. U.S. National Science Foundation [1741277, 1829674, 1704287, 2011845, 1912753]
  2. NSF Directorate for Engineering
  3. NSF Division of Electrical, Communications & Cyber Systems [2011845]
  4. NSF Division of Graduate Education
  5. NSF Directorate for Education and Human Resources [1912753]


Deep learning performs well in object detection and classification for autonomous vehicles but is vulnerable to adversarial samples. This paper proposes two multi-source adversarial sample attack models that effectively break down the perception systems of autonomous vehicles.

Deep learning delivers impressive performance in object detection and classification for autonomous vehicles. Nevertheless, the inherent vulnerability of deep learning models to adversarial samples exposes autonomous vehicles to severe security and safety risks. Although a number of works have studied adversarial samples, only a few are designed for the autonomous-vehicle scenario. Moreover, state-of-the-art attack models focus on a single data source without considering the correlation among multiple data sources. To fill this gap, we propose two multi-source adversarial sample attack models, a parallel attack model and a fusion attack model, to attack the image and LiDAR perception systems of autonomous vehicles simultaneously. In the parallel attack model, adversarial samples are generated from the original image and LiDAR data separately. In the fusion attack model, the image and LiDAR adversarial samples are generated simultaneously from a low-dimensional vector by fully exploiting data correlation for data fusion and adversarial sample generation. Through comprehensive real-data experiments, we validate that our proposed models break down the perception systems of autonomous vehicles more powerfully and efficiently than the state of the art. Furthermore, we simulate possible attack scenarios in Vehicular Ad hoc Networks (VANETs) to evaluate the attack performance of the proposed methods.
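The fusion attack's key idea, as the abstract describes it, is that a single low-dimensional vector can yield correlated perturbations for both sensor modalities. The toy sketch below illustrates only that structural idea; the random linear "decoders", the latent size, and the budget `eps` are placeholder assumptions, not the trained generative adversarial networks used in the paper.

```python
import numpy as np

# Conceptual sketch ONLY: one shared low-dimensional latent vector z is
# decoded into perturbations for both the image and the LiDAR channel.
# The random linear maps stand in for the paper's trained generators.

rng = np.random.default_rng(42)

z = rng.normal(size=8)                    # shared low-dimensional latent vector

W_img = rng.normal(size=(16, 8))          # toy decoder: latent -> image perturbation
W_lidar = rng.normal(size=(12, 8))        # toy decoder: latent -> LiDAR perturbation

eps = 0.1                                 # assumed perturbation budget
delta_img = eps * np.tanh(W_img @ z)      # bounded image perturbation
delta_lidar = eps * np.tanh(W_lidar @ z)  # bounded LiDAR perturbation

# Both perturbations derive from the same z, so they are correlated by
# construction -- the cross-sensor property the fusion model exploits.
print(delta_img.shape, delta_lidar.shape)
```

In contrast, the parallel attack model would optimize `delta_img` and `delta_lidar` independently from the raw image and LiDAR data, with no shared latent vector tying them together.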
