Article

Generating Adversarial Examples Against Machine Learning-Based Intrusion Detector in Industrial Control Systems

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2020.3037500

Keywords

Integrated circuits; Detectors; Reconnaissance; Generative adversarial networks; Protocols; Integrated circuit modeling; Machine learning; Machine learning security; intrusion detection systems; industrial control systems; adversarial examples

Funding

  1. National Key Research and Development Program [2018YFB0803501]
  2. National Natural Science Foundation of China [62073285]
  3. Fundamental Research Funds for the Central Universities


Deploying machine learning-based intrusion detection systems can enhance the security of industrial control systems, but such models are vulnerable to adversarial attacks. This article investigates the possibility of stealthy cyber attacks on intrusion detection systems and proposes two strategies to enhance their robustness. Experiments conducted on a semi-physical testbed demonstrate the effectiveness of the attacks, and adopting adversarial training improves the detector's resistance to adversarial examples.
Deploying machine learning (ML)-based intrusion detection systems (IDS) is an effective way to improve the security of industrial control systems (ICS). However, ML models themselves are vulnerable to adversarial examples, which are generated by deliberately adding subtle, imperceptible perturbations to input samples, causing the model to give a false output with high confidence. In this article, our goal is to investigate the possibility of stealthy cyber attacks against IDS, including injection attacks, function code attacks, and reconnaissance attacks, and to enhance the robustness of IDS to adversarial attack. Unlike in the image domain, where adversarial algorithms are limited only by the distance between the original and newly generated samples, in ICS they are also constrained by the communication protocol and the legal range of the data. We propose two strategies, an optimal solution attack and a GAN attack, oriented to the flexibility and volume of the data, formulating an optimization problem to find stealthy attacks: the former is appropriate for smaller, more flexible samples, while the latter provides a more efficient solution for larger, less flexible samples. Finally, we conduct experiments on a semi-physical ICS testbed with a high-performance ensemble ML-based detector to show the effectiveness of our attacks. The results indicate that the new reconnaissance and function code attack samples produced by both the optimal solution and GAN algorithms are 80 percent more likely to evade the detector while maintaining the same attack effect. In the meantime, we adopt adversarial training as a method to defend against adversarial attack. After training on a mixture of the original dataset and the newly generated samples, the detector becomes more robust to adversarial examples.
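The two ideas central to the abstract, perturbing attack samples only within protocol-legal feature ranges (rather than only bounding perturbation distance, as in the image domain) and then retraining the detector on a mix of original and adversarial samples, can be sketched as follows. This is a minimal illustration, not the authors' method: the feature ranges, the FGSM-style perturbation step, and all variable names are hypothetical assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ICS traffic features, each with a protocol-legal range
# (e.g., function code 1-127, register value 0-65535, a binary flag).
LEGAL_MIN = np.array([1.0, 0.0, 0.0])
LEGAL_MAX = np.array([127.0, 65535.0, 1.0])

def perturb_within_protocol(x, grad_sign, eps):
    """FGSM-style perturbation step, then projection back into the
    legal feature ranges imposed by the ICS protocol. The projection
    is what distinguishes this setting from the image domain, where
    only the perturbation distance is constrained."""
    x_adv = x + eps * grad_sign * (LEGAL_MAX - LEGAL_MIN)
    return np.clip(x_adv, LEGAL_MIN, LEGAL_MAX)

# Original attack samples and a stand-in for the detector's gradient sign.
X_attack = rng.uniform(LEGAL_MIN, LEGAL_MAX, size=(5, 3))
grad_sign = np.sign(rng.standard_normal((5, 3)))

X_adv = perturb_within_protocol(X_attack, grad_sign, eps=0.05)

# Adversarial training: retrain the detector on the original dataset
# mixed with the newly generated adversarial samples.
X_train = rng.uniform(LEGAL_MIN, LEGAL_MAX, size=(20, 3))
X_mixed = np.vstack([X_train, X_adv])

print(X_mixed.shape)  # (25, 3)
```

In practice the gradient sign would come from the detector itself (or a surrogate model), and categorical fields such as function codes would need to be rounded to valid discrete values rather than merely clipped.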
