Related references
Note: Only a subset of the references is listed.
Article
Computer Science, Artificial Intelligence
Xiubin Zhu et al.
Summary: Understanding the rationale behind machine learning predictions is crucial for building confidence and trust in intelligent systems. This study proposes a fuzzy local surrogate model that explains individual predictions and enhances the interpretability of machine learning results. The model is composed of readable rules, making its explanations highly interpretable. The proposed methodology contributes to the interpretation of machine learning models and demonstrates high estimation accuracy in experimental studies.
IEEE TRANSACTIONS ON FUZZY SYSTEMS
(2023)
Article
Computer Science, Artificial Intelligence
Guillermo Fernandez et al.
Summary: Classification algorithms are popular for efficiently generating models to solve complex problems. However, black-box models lack interpretability, making simpler algorithms such as decision trees more attractive. This work proposes explanations for fuzzy decision trees that can mimic the behavior of complex classifiers, including factual and counterfactual explanations as well as the concept of robust factual explanations.
IEEE TRANSACTIONS ON FUZZY SYSTEMS
(2022)
Proceedings Paper
Computer Science, Artificial Intelligence
Te Zhang et al.
Summary: Explainable Artificial Intelligence (XAI) is becoming increasingly important for improving the transparency and verifiability of AI systems. Mamdani fuzzy systems can provide explanations based on linguistic rules, offering a potential pathway to XAI. However, existing counterfactual explanation methods focus mainly on correlation rather than causality and do not specifically address fuzzy systems. This paper proposes a new rule generation framework, CF-MABLAR, for Mamdani fuzzy classification systems that approximates causal links between inputs and outputs. The generated counterfactual rules not only offer basic counterfactual explanations but also articulate how changes in inputs can lead to different outputs, which is crucial for lay-user insight, verification, and sensitivity evaluation of XAI systems.
2022 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE)
(2022)
Review
Physics, Multidisciplinary
Pantelis Linardatos et al.
Summary: Recent advances in artificial intelligence have led to widespread industrial adoption, with machine learning systems demonstrating superhuman performance on many tasks. However, the complexity of these systems makes them difficult to explain, hindering their application in sensitive domains. This has renewed interest in the field of explainable artificial intelligence.
Article
Computer Science, Artificial Intelligence
Nadia Burkart et al.
JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
(2021)
Article
Computer Science, Artificial Intelligence
Jerry M. Mendel et al.
Summary: This article discusses explainable artificial intelligence (XAI) for rule-based fuzzy systems, highlighting the importance of the choice of antecedent membership function shapes for XAI. It provides a novel multi-step approach for obtaining a simplified subset of rules and offers a method for evaluating the quality of explanations.
IEEE TRANSACTIONS ON FUZZY SYSTEMS
(2021)
Article
Computer Science, Theory & Methods
Riccardo Guidotti et al.
ACM COMPUTING SURVEYS
(2019)
Article
Computer Science, Artificial Intelligence
Emran Saleh et al.
ARTIFICIAL INTELLIGENCE IN MEDICINE
(2018)