Proceedings Paper

Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems

Publisher

IEEE
DOI: 10.1109/ICCCN54977.2022.9868845

Keywords

Cyber physical systems; Machine learning; Security; Attacks; Defence; Internet of Things

Summary

With the rise of IoT and AI services, protecting CPS from cyber threats is becoming increasingly challenging. Machine learning methods are used for anomaly detection in CPS, but deep learning models are vulnerable to adversarial attacks. This study examines the impact of adversarial attacks on deep learning-based anomaly detection in CPS and proposes a mitigation approach: retraining the models with adversarial samples.
Abstract

With the emergence of Internet of Things (IoT) and Artificial Intelligence (AI) services and applications in Cyber Physical Systems (CPS), protecting CPS against cyber threats is becoming increasingly challenging. Various security solutions are deployed to protect CPS networks from cyber attacks; for instance, Machine Learning (ML) methods have been used to automate anomaly detection in CPS environments, with deep learning at their core. However, deep learning has been shown to be vulnerable to adversarial attacks: an attacker can apply small perturbations to input samples that mislead the model, resulting in incorrect predictions and reduced accuracy. For example, the Fast Gradient Sign Method (FGSM) is a white-box attack that follows the gradient of the loss with respect to the input (the opposite of the gradient-descent direction used in training) so as to maximize the loss, generating perturbations by adding the signed gradient to clean data.

In this study, we focus on the impact of adversarial attacks on deep learning-based anomaly detection in CPS networks and implement a mitigation approach against the attack by retraining models with adversarial samples. We use the Bot-IoT and Modbus IoT datasets, captured from IoT and Industrial IoT (IIoT) networks, to represent two CPS networks; both provide samples of normal and attack activity. The deep learning models trained on these datasets show high accuracy in detecting attacks. We adopt an Artificial Neural Network (ANN) with one input layer, four intermediate layers, and one output layer whose two nodes represent the binary classification result. To generate adversarial samples for the experiment, we use the fast_gradient_method function from the CleverHans library.

The experimental results demonstrate the influence of FGSM adversarial samples on prediction accuracy and show the effectiveness of the retrained model in defending against adversarial attacks.
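The pipeline the abstract describes — train an ANN, craft FGSM samples x_adv = x + eps · sign(∇x J(θ, x, y)), then retrain on the adversarial samples — can be sketched as below. This is a minimal illustration assuming TensorFlow 2 and CleverHans 4.x; the random placeholder data, layer widths, epoch counts, and eps value are assumptions standing in for the paper's actual Bot-IoT/Modbus preprocessing and hyperparameters, which the abstract does not specify.

```python
# Minimal sketch: train a binary-classification ANN, craft FGSM adversarial
# samples with CleverHans, then retrain on a mix of clean and adversarial data.
# Placeholder data and hyperparameters are assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

# Placeholder features/labels; in the paper these would come from the
# preprocessed Bot-IoT and Modbus IoT datasets.
num_features = 20
x_train = np.random.rand(1000, num_features).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

# One input layer, four intermediate layers, and a two-node output layer
# (binary classification), matching the architecture in the abstract.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # logits for the two classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)

# FGSM: x_adv = x + eps * sign(grad_x J(theta, x, y)), i.e. a step that
# increases the loss rather than decreasing it.
eps = 0.1  # perturbation budget (assumed value)
x_tensor = tf.convert_to_tensor(x_train)
x_adv = fast_gradient_method(model, x_tensor, eps=eps, norm=np.inf)

# Mitigation from the study: retrain on clean + adversarial samples.
x_mixed = np.concatenate([x_train, x_adv.numpy()])
y_mixed = np.concatenate([y_train, y_train])
model.fit(x_mixed, y_mixed, epochs=5, batch_size=64, verbose=0)
```

Evaluating the model on a fresh batch of FGSM samples before and after the retraining step would reproduce, in miniature, the accuracy-drop-and-recovery experiment the abstract reports.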
