Article

Measuring Explainability and Trustworthiness of Power Quality Disturbances Classifiers Using XAI-Explainable Artificial Intelligence

Journal

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
Volume 18, Issue 8, Pages 5127-5137

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TII.2021.3126111

Keywords

Convolution; Power quality; Machine learning algorithms; Classification algorithms; Power measurement; Measurement; Kernel; Convolutional neural network (CNN); deep-learning (DL); energy; evaluation metrics; explainable artificial intelligence (XAI); power quality disturbances (PQDs); power

Funding

  1. Zuckerman Fund for Interdisciplinary Research in Machine Learning and Artificial Intelligence at the Technion
  2. Technion Center for Machine Learning and Intelligent Systems (MLIS)
  3. Nancy and Stephen Grand Technion Energy Program (GTEP)
  4. Guy Sella Memorial Project
  5. Israel Science Foundation [1227/18, 447/20]

Abstract

This article proposes a method that uses explainable artificial intelligence (XAI) to explain the outputs of power quality disturbance (PQD) classifiers, making the results transparent enough for experts to make informed decisions.
Advanced machine learning techniques have recently demonstrated outstanding performance when applied to power quality disturbance (PQD) classification. Nevertheless, power experts may find it hard to trust the results of such algorithms if they do not fully understand the reasons for their outputs. In this light, this article suggests a method that explains the outputs of PQD classifiers using explainable artificial intelligence (XAI). The method operates as follows: first, various XAI techniques and classifiers are combined and scored based on their explanations during the validation step. Then, the best combination of classifier and XAI technique for each disturbance is used on the testing set, so that the classifier outputs are more transparent. To accomplish these steps, a definition of the correct explanation in PQD is given. In addition, to determine the quality of an explanation for a given output, an evaluation process is proposed that measures an explainability score for each XAI technique and classifier. By means of this approach, the PQD classifier outputs are optimized to be both accurate and easy to understand, allowing experts to make informed and trustworthy decisions.
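The selection step described in the abstract can be sketched as a simple search: score every (classifier, XAI technique) pair on the validation set for each disturbance, then keep the highest-scoring pair. The sketch below is a hypothetical illustration only; `explainability_score` is a toy stand-in (the paper defines the score by comparing explanations against a "correct explanation" for each PQD), and all classifier/XAI names are placeholders, not the authors' code.

```python
def explainability_score(clf, xai, disturbance, validation_scores):
    """Toy stand-in for the paper's evaluation process, which compares
    XAI attributions against the correct explanation for the disturbance.
    Here we just look up a precomputed score."""
    return validation_scores.get((clf, xai, disturbance), 0.0)


def select_best_pairs(classifiers, xai_methods, disturbances, validation_scores):
    """For each disturbance, pick the (classifier, XAI) pair with the
    highest explainability score on the validation set."""
    best = {}
    for d in disturbances:
        score, clf, xai = max(
            (explainability_score(c, x, d, validation_scores), c, x)
            for c in classifiers
            for x in xai_methods
        )
        best[d] = (clf, xai, score)
    return best
```

At test time, the selected pair per disturbance is the one used to produce both the classification and its explanation.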

