4.6 Article

Multi-Class Fuzzy-LORE: A Method for Extracting Local and Counterfactual Explanations Using Fuzzy Decision Trees

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

Fuzzy Rule-Based Local Surrogate Models for Black-Box Model Explanation

Xiubin Zhu et al.

Summary: Understanding the rationale behind machine learning predictions is crucial for building confidence and trust in intelligent systems. This study proposes a fuzzy local surrogate model that explains individual predictions and enhances the interpretability of machine learning results. The model is composed of readable rules, making its predictions highly interpretable. The methodology contributes to the interpretation of machine learning models and demonstrates high estimation accuracy in experimental studies.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2023)

Article Computer Science, Artificial Intelligence

Factual and Counterfactual Explanations in Fuzzy Classification Trees

Guillermo Fernandez et al.

Summary: Classification algorithms are popular for efficiently generating models to solve complex problems. However, black-box models lack interpretability, making simpler algorithms such as decision trees more attractive. The authors propose explanations for fuzzy decision trees that can mimic the behavior of complex classifiers. The proposal includes factual and counterfactual explanations, as well as the concept of robust factual explanations.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Counterfactual rule generation for fuzzy rule-based classification systems

Te Zhang et al.

Summary: Explainable Artificial Intelligence (XAI) is becoming increasingly important for the transparency and verifiability of AI systems. Mamdani fuzzy systems can provide explanations based on linguistic rules, which makes them a potential pathway to XAI. However, existing counterfactual explanation methods focus mainly on correlation rather than causality and do not specifically address fuzzy systems. The authors propose a new rule generation framework, CF-MABLAR, for Mamdani fuzzy classification systems that approximates causal links between inputs and outputs. The generated counterfactual (CF) rules not only offer basic counterfactual explanations but also articulate how changes in inputs can lead to different outputs, which is crucial for lay-user insight, verification, and sensitivity evaluation of XAI systems.

2022 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE) (2022)

Review Physics, Multidisciplinary

Explainable AI: A Review of Machine Learning Interpretability Methods

Pantelis Linardatos et al.

Summary: Recent advances in artificial intelligence have led to widespread industrial adoption, with machine learning systems demonstrating superhuman performance. However, the complexity of these systems has made them difficult to explain, hindering their application in sensitive domains. Therefore, there is a renewed interest in the field of explainable artificial intelligence.

ENTROPY (2021)

Article Computer Science, Artificial Intelligence

A Survey on the Explainability of Supervised Machine Learning

Nadia Burkart et al.

JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH (2021)

Article Computer Science, Artificial Intelligence

Critical Thinking About Explainable AI (XAI) for Rule-Based Fuzzy Systems

Jerry M. Mendel et al.

Summary: This article discusses explainable artificial intelligence (XAI) for rule-based fuzzy systems, highlighting the importance of choosing antecedent membership function shapes for XAI. It provides a novel multi-step approach to obtain a simplified subset of rules, and offers a method to evaluate the quality of explanations.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2021)

Article Computer Science, Theory & Methods

A Survey of Methods for Explaining Black Box Models

Riccardo Guidotti et al.

ACM COMPUTING SURVEYS (2019)

Article Computer Science, Artificial Intelligence

Learning ensemble classifiers for diabetic retinopathy assessment

Emran Saleh et al.

ARTIFICIAL INTELLIGENCE IN MEDICINE (2018)