Article

Explainable Unsupervised Machine Learning for Cyber-Physical Systems

Journal

IEEE Access
Volume 9, Pages 131824-131843

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2021.3112397

Keywords

Machine learning; Data models; Machine learning algorithms; Prediction algorithms; Self-organizing feature maps; Decision making; Artificial intelligence; Explainable artificial intelligence; Self-organizing maps; Interpretable machine learning; Unsupervised machine learning

Funding

  1. Commonwealth Cyber Initiative, an investment in the advancement of cyber R&D, innovation, and workforce development

Summary

Cyber-Physical Systems (CPSs) are crucial in modern infrastructure, but issues of reliability, performance, and security persist. While predictive Machine Learning (ML) models offer opportunities for CPSs, their black-box nature poses challenges in safety-critical systems. Maximizing the use of ML in CPSs therefore requires explainable unsupervised ML models.
Abstract

Cyber-Physical Systems (CPSs) play a critical role in our modern infrastructure due to their capability to connect computing resources with physical systems. Consequently, the reliability, performance, and security of CPSs continue to receive increased attention from the research community. CPSs produce massive amounts of data, creating opportunities to use predictive Machine Learning (ML) models for performance monitoring and optimization, preventive maintenance, and threat detection. However, the black-box nature of complex ML models is a drawback when used in safety-critical systems such as CPSs. While explainable ML has been an active research area in recent years, much of the work has focused on supervised learning. As CPSs rapidly produce massive amounts of unlabeled data, relying on supervised learning alone is not sufficient for data-driven decision making in CPSs. Therefore, if we are to maximize the use of ML in CPSs, it is necessary to have explainable unsupervised ML models. In this paper, we outline how unsupervised explainable ML could be used within CPSs. We review the existing work in unsupervised ML, present initial desiderata of explainable unsupervised ML for CPSs, and present a Self-Organizing Map (SOM) based explainable clustering methodology that generates global and local explanations. We evaluate the fidelity of the generated explanations using feature perturbation techniques. The results show that the proposed method identifies the most important features responsible for the decision-making process of SOMs. Further, we demonstrate that explainable SOMs are a strong candidate for explainable unsupervised machine learning by comparing their capabilities and limitations with those of current explainable unsupervised methods.
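The abstract references two concrete techniques: deriving explanations from a trained Self-Organizing Map and checking their fidelity via feature perturbation. The sketch below is a rough illustration of that workflow, not the authors' exact method. It trains a SOM with the MiniSom library on a synthetic dataset, computes a simple global feature-importance heuristic from the spread of the codebook weights, and then estimates fidelity by permuting one feature at a time and counting how many samples change their best-matching unit (BMU); the dataset, map size, and both heuristics are assumptions made for illustration.

```python
# A minimal sketch, assuming the MiniSom library and synthetic data; the
# importance and fidelity heuristics are illustrative, not the exact
# procedure from the paper. In practice the data should be standardized
# before SOM training.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)

# Toy CPS-like sensor data: 500 samples, 6 features, where features 0 and 1
# carry the cluster structure and the remaining 4 are noise (an assumption).
centers = rng.normal(size=(3, 2))
labels = rng.integers(0, 3, size=500)
data = np.hstack([
    centers[labels] + 0.1 * rng.normal(size=(500, 2)),  # informative
    rng.normal(size=(500, 4)),                          # noise
])

# Train a small SOM on the data.
som = MiniSom(x=5, y=5, input_len=data.shape[1],
              sigma=1.0, learning_rate=0.5, random_seed=0)
som.train(data, num_iteration=2000)

# Global explanation heuristic: features whose codebook weights vary most
# across the map units contribute most to separating those units.
weights = som.get_weights().reshape(-1, data.shape[1])  # (units, features)
importance = weights.var(axis=0)
importance /= importance.sum()
print("global feature importance:", np.round(importance, 3))

# Fidelity check via feature perturbation: permute one feature at a time
# and count how many samples move to a different best-matching unit.
baseline_bmus = [som.winner(x) for x in data]
for f in range(data.shape[1]):
    perturbed = data.copy()
    perturbed[:, f] = rng.permutation(perturbed[:, f])
    changed = sum(som.winner(x) != bmu
                  for x, bmu in zip(perturbed, baseline_bmus))
    print(f"feature {f}: {changed / len(data):.1%} of BMUs changed")
```

If the features that the importance score ranks highly are also the ones whose permutation causes the largest share of BMU changes, the explanation is consistent with the map's actual decision behavior, which is the spirit of the fidelity evaluation described in the abstract.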
