Article

TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security

Journal

IEEE Internet of Things Journal
Volume 10, Issue 4, Pages 2967-2978

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2021.3122019

Keywords

Artificial intelligence (AI); Industrial Internet of Things (IIoT); numerical models; mathematical models; computational modeling; data models; predictive models; cybersecurity; explainable AI (XAI); machine learning (ML); statistical modeling; trustworthy AI

Abstract

Despite the significant growth of artificial intelligence (AI), its black box nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk IoT applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI is still challenging, especially in numerical applications. Here, we propose a universal XAI model, named Transparency Relying Upon Statistical Theory (TRUST), which is model-agnostic, high performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information (MI) to rank these variables, pick only the ones most influential on the AI's outputs, and call them representatives of the classes. Then, we use multimodal Gaussian (MMG) distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the Industrial Internet of Things (IIoT) using three different cybersecurity data sets, as IIoT is a prominent application that deals with numerical data. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with local interpretable model-agnostic explanations (LIME), a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and method of explainability. Finally, we also show how TRUST's explanations are presented to the user.
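The pipeline described in the abstract (factor analysis, MI ranking of latent variables, per-class multimodal Gaussians) can be sketched with off-the-shelf tooling. Below is a minimal illustration using scikit-learn, assuming numeric features and an already-trained black-box model whose predicted labels stand in for the AI's outputs; the function names and hyperparameters (fit_trust, explain, n_factors, n_representatives, n_modes) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the TRUST-style surrogate described in the abstract:
# factor analysis -> mutual-information ranking -> per-class multimodal
# Gaussians. All names and parameter values here are assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture


def fit_trust(X_train, ai_labels, n_factors=10, n_representatives=5, n_modes=3):
    """Model the statistical behavior of the AI's outputs (ai_labels)."""
    ai_labels = np.asarray(ai_labels)

    # 1) Factor analysis: transform input features into latent variables.
    fa = FactorAnalysis(n_components=n_factors).fit(X_train)
    Z = fa.transform(X_train)

    # 2) Mutual information: rank latent variables by influence on the
    #    AI's outputs and keep the top ones as class "representatives".
    mi = mutual_info_classif(Z, ai_labels)
    reps = np.argsort(mi)[::-1][:n_representatives]

    # 3) Multimodal Gaussian (MMG): fit one mixture per class over the
    #    representatives to score class membership of new samples.
    mmgs = {
        c: GaussianMixture(n_components=n_modes).fit(Z[ai_labels == c][:, reps])
        for c in np.unique(ai_labels)
    }
    return fa, reps, mmgs


def explain(x_new, fa, reps, mmgs):
    """Per-class log-likelihoods for one new sample; the highest-scoring
    class indicates where the surrogate believes the sample belongs."""
    z = fa.transform(np.asarray(x_new).reshape(1, -1))[:, reps]
    return {c: gm.score(z) for c, gm in mmgs.items()}
```

Because the mixtures are fitted to the black box's predicted labels rather than the ground truth, this surrogate explains the AI's behavior statistically; comparing per-class likelihoods for a new sample plays the role the abstract assigns to the MMG step.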
