Article

Generating Adversarial Examples Against Machine Learning-Based Intrusion Detector in Industrial Control Systems

Journal

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Volume 19, Issue 3, Pages 1810-1825

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2020.3037500

Keywords

Integrated circuits; Detectors; Reconnaissance; Generative adversarial networks; Protocols; Integrated circuit modeling; Machine learning; Machine learning security; intrusion detection systems; industrial control systems; adversarial examples

Funding

  1. National Key Research and Development Program [2018YFB0803501]
  2. National Natural Science Foundation of China [62073285]
  3. Fundamental Research Funds for the Central Universities

Abstract

Deploying machine learning-based intrusion detection systems can enhance the security of industrial control systems, but such models are themselves vulnerable to adversarial attacks. This article investigates the feasibility of stealthy cyber attacks against these detectors, proposes two strategies for generating adversarial examples, and applies adversarial training to harden the detector. Experiments on a semi-physical testbed demonstrate the effectiveness of the attacks and show that adversarial training improves the detector's resistance to adversarial examples.
Deploying machine learning (ML)-based intrusion detection systems (IDS) is an effective way to improve the security of industrial control systems (ICS). However, ML models themselves are vulnerable to adversarial examples: inputs crafted by deliberately adding subtle, hard-to-notice perturbations that cause the model to produce a false output with high confidence. In this article, our goal is to investigate the possibility of stealthy cyber attacks against IDS, including injection attacks, function code attacks, and reconnaissance attacks, and to enhance IDS robustness to adversarial attack. Unlike in the image domain, where adversarial algorithms are limited only by the distance between original and newly generated samples, in ICS they must also conform to the communication protocol and the legal range of data values. We propose two strategies, an optimal-solution attack and a GAN attack, oriented to the flexibility and volume of the data; both formulate the search for stealthy attacks as an optimization problem. The former is appropriate for smaller, more flexible samples, while the latter provides a more efficient solution for larger, less flexible samples. Finally, we conduct experiments on a semi-physical ICS testbed with an ensemble ML-based detector of high detection performance to show the effectiveness of our attacks. The results indicate that new reconnaissance and function code attack samples produced by both the optimal-solution and GAN algorithms have an 80 percent higher probability of evading the detector while maintaining the same attack effect. We also adopt adversarial training as a defense against adversarial attack: after training on a mixture of the original dataset and the newly generated samples, the detector becomes more robust to adversarial examples.
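To make the constrained-optimization idea concrete, here is a minimal sketch, not the authors' implementation: it trains a toy logistic-regression detector, crafts adversarial samples by gradient descent on the detector's attack score with projection onto assumed protocol-legal feature ranges after each step, and then retrains on the mixed dataset (adversarial training). The feature semantics, legal ranges, data, and detector are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression detector: label 1 = attack traffic, 0 = normal.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.05, epochs=2000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic ICS-like features with assumed protocol-legal ranges,
# e.g. a register value, a normalized rate, and a function code.
LO = np.array([0.0, 0.0, 1.0])
HI = np.array([10.0, 1.0, 16.0])
X_norm = rng.uniform(LO, LO + 0.5 * (HI - LO), size=(200, 3))
X_atk = rng.uniform(LO + 0.5 * (HI - LO), HI, size=(200, 3))
X = np.vstack([X_norm, X_atk])
y = np.r_[np.zeros(200), np.ones(200)]
w, b = train(X, y)

# Optimization-style attack: gradient descent on the detector's attack
# probability, projected after every step onto the legal feature box.
def craft(x, w, b, steps=200, lr=2.0):
    x = x.copy()
    for _ in range(steps):
        p = sigmoid(x @ w + b)        # detector's attack probability
        x -= lr * p * (1 - p) * w     # step toward the "normal" region
        x = np.clip(x, LO, HI)        # enforce protocol-legal ranges
    return x

X_adv = np.array([craft(x, w, b) for x in X_atk])
print("evasion rate:", np.mean(sigmoid(X_adv @ w + b) < 0.5))

# Defense: adversarial training on original plus crafted samples,
# keeping the attack label on the adversarial examples.
X_mix, y_mix = np.vstack([X, X_adv]), np.r_[y, np.ones(len(X_adv))]
w2, b2 = train(X_mix, y_mix)
print("evasion rate after adversarial training:",
      np.mean(sigmoid(X_adv @ w2 + b2) < 0.5))
```

The clipping step is what distinguishes the ICS setting from image-domain attacks: perturbed samples must remain protocol-legal, not merely close to the originals.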
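The paper's second strategy amortizes this search with a GAN: instead of optimizing each sample individually, a generator learns to produce evasive samples in bulk, which the abstract notes is more efficient for larger, less flexible data. Below is a simplified PyTorch sketch under the same assumed feature ranges; it trains the generator against a frozen surrogate detector rather than a jointly trained discriminator, so it compresses the paper's actual GAN setup. All network shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
LO = torch.tensor([0.0, 0.0, 1.0])    # assumed legal minima per feature
HI = torch.tensor([10.0, 1.0, 16.0])  # assumed legal maxima per feature

# Frozen surrogate detector standing in for the ensemble ML-based IDS;
# in the paper's setting this would be the trained target model.
detector = nn.Sequential(nn.Linear(3, 16), nn.ReLU(),
                         nn.Linear(16, 1), nn.Sigmoid())
detector.requires_grad_(False)

# Generator: (attack sample, noise) -> adversarial sample. The final
# sigmoid plus rescaling guarantees outputs stay inside the legal box.
generator = nn.Sequential(nn.Linear(3 + 4, 32), nn.ReLU(),
                          nn.Linear(32, 3), nn.Sigmoid())
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

# Attack samples drawn from the upper half of each legal range.
x_atk = LO + (HI - LO) * (0.5 + 0.5 * torch.rand(256, 3))

for step in range(2000):
    z = torch.randn(len(x_atk), 4)                  # noise input
    unit = generator(torch.cat([x_atk, z], dim=1))  # values in (0, 1)
    x_adv = LO + (HI - LO) * unit                   # rescale to legal box
    loss = detector(x_adv).mean()                   # push score toward 0
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    print("mean detector score on generated samples:",
          detector(x_adv).mean().item())
```

Once trained, the generator produces evasive samples with a single forward pass, whereas the per-sample optimization above must be rerun for every new input; this is the efficiency trade-off the abstract attributes to the GAN attack.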
