Article

Efficient Cyber Attack Detection in Industrial Control Systems Using Lightweight Neural Networks and PCA

Journal

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Volume 19, Issue 4, Pages 2179-2197

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2021.3050101

Keywords

Anomaly detection; industrial control systems; convolutional neural networks; autoencoders; frequency analysis; adversarial machine learning; adversarial robustness

Funding

  1. European Union [830927]
  2. Rafael Advanced Defense Systems

Abstract

Industrial control systems (ICSs) are widely used and vital to industry and society. Their failure can have severe impact on both the economy and human life. Hence, these systems have become an attractive target for physical and cyber attacks alike. In this article, we examine an attack detection method based on simple and lightweight neural networks, namely, 1D convolutional neural networks and autoencoders. We apply these networks to both the time and frequency domains of the data and discuss the pros and cons of each representation approach. The suggested method is evaluated on three popular public datasets, and detection rates matching or exceeding previously published detection results are achieved, while demonstrating a small footprint, short training and detection times, and generality. We also show the effectiveness of PCA, which, given proper data preprocessing and feature selection, can provide high attack detection rates in many settings. Finally, we study the proposed method's robustness against adversarial attacks that exploit inherent blind spots of neural networks to evade detection while achieving their intended physical effect. Our results show that the proposed method is robust to such evasion attacks: in order to evade detection, the attacker is forced to sacrifice the desired physical impact on the system.
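To make the PCA-based detection idea concrete, the following is a minimal sketch (not the authors' exact pipeline): fit PCA on benign sensor data and score new samples by their reconstruction error from the top principal components, so that samples violating the learned sensor correlations score high. The function name, toy data, and component count are illustrative assumptions.

```python
import numpy as np

def pca_anomaly_scores(train, test, n_components=3):
    """Score samples by PCA reconstruction error.

    PCA is fit on benign training data only; a test sample that the
    top principal components cannot reconstruct well is flagged as
    anomalous.
    """
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-8
    Xtr = (train - mean) / std
    # Principal directions of the benign data (rows of Vt).
    _, _, Vt = np.linalg.svd(Xtr, full_matrices=False)
    W = Vt[:n_components]            # top components
    Xte = (test - mean) / std
    recon = Xte @ W.T @ W            # project onto components, reconstruct
    return np.mean((Xte - recon) ** 2, axis=1)

# Toy example: five highly correlated "sensors" vs. one tampered sample.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
benign = np.hstack([base + 0.05 * rng.normal(size=(500, 1)) for _ in range(5)])
attack = benign[:1].copy()
attack[0, 2] += 5.0                  # one sensor driven off its correlated value
scores = pca_anomaly_scores(benign, np.vstack([benign[:1], attack]))
```

Here the tampered sample receives a much larger reconstruction error than the benign one, since the injected value breaks the cross-sensor correlation that PCA captures. As the abstract notes, the quality of such a detector depends heavily on data preprocessing (e.g., per-sensor standardization, as above) and feature selection.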


