Article

A New Lightweight In Situ Adversarial Sample Detector for Edge Deep Neural Network

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JETCAS.2021.3076101

Keywords

Hardware; Artificial intelligence; Computational modeling; Deep learning; Perturbation methods; Image edge detection; Detectors; Deep neural network; deep learning accelerator; edge AI; adversarial examples; AI security

Funding

  1. National Research Foundation, Singapore, through the National Cybersecurity Research and Development Programme/Cyber-Hardware Forensic and Assurance Evaluation Research and Development Programme [CHFA-GC1-AW01]

Abstract

The rise of the Internet of Things has revived on-premise computing for data analysis, supported by hardware accelerators and open-source AI model compilers for edge AI applications. This shift in where deep learning computations run does not, however, reduce the vulnerability of deep neural networks to adversarial attacks, and existing off-line detection methods are difficult to apply at the edge.
The flourishing of the Internet of Things (IoT) has rekindled on-premise computing to allow data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at their core. This paradigm shift in deep learning computations does not, however, reduce the vulnerability of deep neural networks (DNN) against adversarial attacks, but it leaves existing defenses playing a difficult game of catch-up. This is because existing methodologies rely mainly on off-line analysis to detect adversarial inputs, assuming that the deep learning model is implemented on a 32-bit floating-point graphical processing unit (GPU) instance. In this paper, we propose a new hardware-oriented approach for in-situ detection of adversarial inputs fed through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge. Our method exploits controlled glitch injection into the clock signal of the DNN accelerator to maximize the information gain for the discrimination of adversarial and benign inputs. A light gradient boosting machine (lightGBM) is constructed by analyzing the prediction probabilities of the unmutated and mutated models and the label-change inconsistency between the adversarial and benign samples in the training dataset. With negligibly small hardware overhead, the glitch injection circuit and the trained lightGBM detector can be easily implemented alongside the deep learning model on a Xilinx ZU9EG chip. The effectiveness of the proposed detector is validated against four state-of-the-art adversarial attacks on two different types and scales of DNN models, VGG16 and ResNet50, for a thousand-class visual object recognition application. The results show a significant increase in true positive rate and a substantial reduction in false positive rate on the Fast Gradient Sign Method (FGSM), Iterative FGSM (I-FGSM), C&W and universal perturbation attacks, compared with modern software-oriented adversarial sample detection methods.
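To make the detection idea more concrete, the following is a minimal sketch of how a lightGBM-style detector could be trained on features derived from the prediction probabilities of the unmutated model and several glitch-mutated inference runs. It is not the authors' exact pipeline: the feature set (top-1 confidences, their spread, and the label-change rate across mutated runs), the array shapes, and the hyperparameters are illustrative assumptions, and the glitch-injected inference itself is assumed to have already produced the softmax outputs.

```python
# Illustrative sketch (not the paper's exact feature set or hyperparameters):
# train a LightGBM classifier to separate adversarial from benign inputs using
# statistics of the unmutated model's and glitch-mutated runs' predictions.
#
# Assumed inputs:
#   probs_clean   : (n_samples, n_classes) softmax outputs of the unmutated model
#   probs_mutated : (n_samples, n_mutations, n_classes) softmax outputs under
#                   different clock-glitch settings
#   is_adversarial: (n_samples,) 0/1 ground-truth labels for training
import numpy as np
from lightgbm import LGBMClassifier

def build_features(probs_clean, probs_mutated):
    """Derive per-sample features from unmutated vs. mutated predictions."""
    top_clean = probs_clean.max(axis=1)            # confidence of unmutated model
    labels_clean = probs_clean.argmax(axis=1)
    labels_mut = probs_mutated.argmax(axis=2)      # (n_samples, n_mutations)
    # Fraction of mutated runs whose top-1 label disagrees with the clean run
    label_change_rate = (labels_mut != labels_clean[:, None]).mean(axis=1)
    # Mean and spread of the mutated runs' top-1 confidence
    top_mut = probs_mutated.max(axis=2)
    return np.column_stack([top_clean,
                            top_mut.mean(axis=1),
                            top_mut.std(axis=1),
                            label_change_rate])

def train_detector(probs_clean, probs_mutated, is_adversarial):
    X = build_features(probs_clean, probs_mutated)
    detector = LGBMClassifier(n_estimators=200, max_depth=5, learning_rate=0.05)
    detector.fit(X, is_adversarial)
    return detector

# At inference time, a sample would be flagged as adversarial when the
# detector's predicted probability for the adversarial class exceeds a chosen
# threshold, e.g.:
#   score = detector.predict_proba(build_features(p_clean, p_mut))[:, 1]
#   flag  = score > 0.5
```

Because the trained model reduces to a small set of decision trees over a handful of scalar features, a detector of this kind is cheap enough to evaluate alongside the accelerator on the edge device, which is consistent with the negligible hardware overhead reported in the abstract.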

