Article

Fuzzy Rule-Based Local Surrogate Models for Black-Box Model Explanation

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

CaSE: Explaining Text Classifications by Fusion of Local Surrogate Explanation Models with Contextual and Semantic Knowledge

Sebastian Kiefer

Summary: CaSE proposes an explanation architecture that overcomes drawbacks of traditional explanation techniques by using semantic feature arrangements or semantic interrogations, which yield more meaningful and coherent explanations. By fusing knowledge from unsupervised topic models with local surrogate explanations, CaSE can generate understandable, high-quality explanations for any text classifier, thereby improving the basis for interactive machine learning.

INFORMATION FUSION (2022)

Article Computer Science, Artificial Intelligence

IFC-BD: An Interpretable Fuzzy Classifier for Boosting Explainable Artificial Intelligence in Big Data

Fatemeh Aghaeipoor et al.

Summary: This article introduces an interpretable fuzzy classifier for Big Data that boosts explainability by learning a compact yet accurate fuzzy model. Developed in a cell-based distributed framework, IFC-BD works in three stages: initial rule learning, rule generalization, and heuristic rule selection, moving from a large number of specific rules to fewer, more general, and more confident rules (a minimal sketch of this flow follows below). The algorithm was found to improve both the explainability and the predictive performance of fuzzy rule-based classifiers compared with state-of-the-art approaches.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2022)
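The three-stage flow described above can be pictured with a minimal sketch. The rule encoding, confidence scores, thresholds, and helper names are all illustrative assumptions, not IFC-BD's actual distributed implementation:

```python
# Hedged sketch of a three-stage rule pipeline in the spirit of IFC-BD:
# (1) learn specific rules per data cell, (2) generalize them by dropping
# antecedent terms, (3) heuristically keep the most confident ones.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedent: tuple   # e.g. (("temp", "high"), ("load", "low"))
    consequent: str     # predicted class label
    confidence: float   # support-based confidence estimate

def initial_rules(cells):
    """Stage 1: one specific rule per cell, labelled with its majority class."""
    rules = []
    for antecedent, labels in cells.items():
        label, count = Counter(labels).most_common(1)[0]
        rules.append(Rule(antecedent, label, count / len(labels)))
    return rules

def generalize(rules, min_conf=0.8):
    """Stage 2: drop one antecedent term at a time if confidence stays high."""
    general = set()
    for r in rules:
        for i in range(len(r.antecedent)):
            shorter = r.antecedent[:i] + r.antecedent[i + 1:]
            if shorter and r.confidence >= min_conf:
                general.add(Rule(shorter, r.consequent, r.confidence))
    return list(general) or rules

def select(rules, budget=10):
    """Stage 3: heuristic selection, keep the most confident, shortest rules."""
    return sorted(rules, key=lambda r: (-r.confidence, len(r.antecedent)))[:budget]

# Usage: cells map antecedent patterns to the class labels observed under them.
cells = {(("temp", "high"),): ["hot", "hot", "mild"],
         (("temp", "low"), ("wind", "strong")): ["cold", "cold"]}
rule_base = select(generalize(initial_rules(cells)))
```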

Article Computer Science, Artificial Intelligence

A novel model usability evaluation framework (MUsE) for explainable artificial intelligence

Juergen Dieber et al.

Summary: The article discusses the importance of understanding the decision-making process of complex machine learning models and the growing interest in explainable artificial intelligence tools. It then evaluates, through a series of assessments, how effectively the LIME xAI framework makes tabular models more interpretable.

INFORMATION FUSION (2022)

Article Computer Science, Artificial Intelligence

Factual and Counterfactual Explanations in Fuzzy Classification Trees

Guillermo Fernandez et al.

Summary: Classification algorithms are popular for efficiently generating models that solve complex problems, but black-box models lack interpretability, which makes simpler algorithms such as decision trees attractive. The authors propose explanations for fuzzy decision trees that can mimic the behavior of complex classifiers, covering factual and counterfactual explanations as well as the notion of robust factual explanations (a toy illustration of the factual/counterfactual distinction follows below).

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2022)
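To make the factual/counterfactual distinction concrete, here is a deliberately tiny, crisp stand-in classifier; the paper works with fuzzy decision trees, so treat this as a structural analogy only, with all rules, features, and helpers invented for illustration:

```python
# Factual explanation: the conditions the fired rule actually relied on.
# Counterfactual explanation: a minimal change that flips the outcome.

def classify(x):
    # Stand-in model: one hand-written rule over a dict of features.
    return "approve" if x["income"] > 50 and x["debt"] < 20 else "reject"

def factual(x):
    """List the conditions of x that drive the current prediction."""
    if classify(x) == "approve":
        return [f"income={x['income']} > 50", f"debt={x['debt']} < 20"]
    return [c for c in (f"income={x['income']} <= 50" if x["income"] <= 50 else None,
                        f"debt={x['debt']} >= 20" if x["debt"] >= 20 else None) if c]

def counterfactual(x, steps=(("income", +1), ("debt", -1)), max_iter=100):
    """Smallest single-feature shift (in unit steps) that flips the outcome."""
    base = classify(x)
    for feat, delta in steps:
        y = dict(x)
        for i in range(1, max_iter + 1):
            y[feat] = x[feat] + i * delta
            if classify(y) != base:
                return f"change {feat} from {x[feat]} to {y[feat]}"
    return None

applicant = {"income": 45, "debt": 15}
print(classify(applicant), factual(applicant), counterfactual(applicant))
# reject ['income=45 <= 50'] change income from 45 to 51
```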

Article Automation & Control Systems

Identification of Fuzzy Rule-Based Models With Collaborative Fuzzy Clustering

Xingchen Hu et al.

Summary: The study addresses the privacy concerns that arise when all the data needed to build fuzzy rule-based models (FRBMs) cannot be gathered in one place. It uses collaborative fuzzy clustering to share, exchange, and utilize information between the input and output spaces of a system, ultimately enhancing model performance.

IEEE TRANSACTIONS ON CYBERNETICS (2022)

Article Automation & Control Systems

A Granular Approach to Interval Output Estimation for Rule-Based Fuzzy Models

Xiubin Zhu et al.

Summary: This study elaborates on the realization of granular outputs for rule-based fuzzy models so that modeling errors can be quantified effectively. The resulting granular model combines a regression model and an error model, with information granularity playing a central role. The quality of the produced interval estimates is evaluated using coverage and specificity criteria (sketched below), and the optimal allocation of information granularity is determined.

IEEE TRANSACTIONS ON CYBERNETICS (2022)
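The coverage and specificity criteria mentioned above are commonly formalized along the following lines, where y_k^- and y_k^+ denote the predicted interval bounds and L a scaling range; the notation is chosen here for illustration and the paper's exact definitions may differ:

```latex
% Coverage: fraction of targets falling inside their predicted intervals.
% Specificity: rewards narrow intervals; L scales the interval width.
\[
\mathrm{cov} = \frac{1}{N}\sum_{k=1}^{N}
  \mathbf{1}\!\left[\, y_k \in [\,y_k^-,\; y_k^+\,] \,\right],
\qquad
\mathrm{spec} = \frac{1}{N}\sum_{k=1}^{N}
  \max\!\left(0,\; 1 - \frac{y_k^+ - y_k^-}{L}\right),
\qquad
Q = \mathrm{cov}\cdot\mathrm{spec}.
\]
```

The product Q is one convenient figure of merit: widening the intervals raises coverage but lowers specificity, so maximizing Q trades the two off against each other.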

Article Computer Science, Artificial Intelligence

Horizontal Federated Learning of Takagi-Sugeno Fuzzy Rule-Based Models

Xiubin Zhu et al.

Summary: This article elaborates on the design and implementation of a fuzzy rule-based model in the horizontal federated learning framework, proposing a two-step federated learning approach that trains an accurate global model while preserving data privacy (a schematic sketch of such a two-step scheme follows below).

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2022)
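A FedAvg-flavoured illustration of such a two-step scheme is sketched below: clients first contribute rule antecedents (cluster prototypes), then locally fitted Takagi-Sugeno consequents are averaged on the server. The clustering stand-in, the plain parameter averaging, and the assumed alignment of prototypes across clients are simplifications, not the paper's protocol:

```python
import numpy as np

def local_prototypes(X, n_rules, seed=0):
    """Crude stand-in for local fuzzy clustering (k-means-style updates)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_rules, replace=False)]
    for _ in range(10):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(0) if (labels == k).any()
                            else centers[k] for k in range(n_rules)])
    return centers

def firing(X, centers, width=1.0):
    """Normalized Gaussian rule activations around each prototype."""
    d = ((X[:, None] - centers) ** 2).sum(-1)
    w = np.exp(-d / (2 * width ** 2))
    return w / w.sum(1, keepdims=True)

def local_consequents(X, y, centers):
    """Per-rule linear consequents via locally weighted least squares."""
    W = firing(X, centers)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add bias column
    return np.array([np.linalg.lstsq(Xb * W[:, [k]], y * W[:, k], rcond=None)[0]
                     for k in range(W.shape[1])])

# Step 1: server averages client prototypes (naively assumes aligned order).
# Step 2: server averages the consequents each client fits to the shared rules.
clients = [(np.random.rand(60, 2), np.random.rand(60)) for _ in range(3)]
centers = np.mean([local_prototypes(X, 4, seed=i)
                   for i, (X, _) in enumerate(clients)], axis=0)
theta = np.mean([local_consequents(X, y, centers) for X, y in clients], axis=0)
```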

Article Economics

Causal Interpretations of Black-Box Models

Qingyuan Zhao et al.

Summary: The fields of machine learning and causal inference have developed concepts, tools, and theory that benefit each other. Extracting causal interpretations from black-box models requires a model with good predictive performance, domain knowledge, and suitable visualization tools.

JOURNAL OF BUSINESS & ECONOMIC STATISTICS (2021)

Article Computer Science, Artificial Intelligence

GLocalX - From Local to Global Explanations of Black Box AI Models

Mattia Setzu et al.

Summary: Artificial Intelligence (AI) is widely used in various aspects of society, including in complex tasks where machine learning models show remarkable accuracy but lack interpretability. GLocalX is a local-first explanation method that aggregates local explanations to provide insight into black box models.

ARTIFICIAL INTELLIGENCE (2021)

Article Computer Science, Artificial Intelligence

CASTLE: Cluster-aided space transformation for local explanations

Valerio La Gatta et al.

Summary: This paper proposes a novel model-agnostic explainable AI technique named CASTLE that provides rule-based explanations grounded in both the local and global workings of the model. The framework was evaluated on six datasets in terms of temporal efficiency, cluster quality, and model significance, showing a 6% increase in interpretability over Anchors, another state-of-the-art technique.

EXPERT SYSTEMS WITH APPLICATIONS (2021)

Article Computer Science, Artificial Intelligence

Post-hoc explanation of black-box classifiers using confident itemsets

Milad Moradi et al.

Summary: Black-box AI methods such as deep neural networks are widely used for building predictive models, but their decisions are hard to trust because their inner workings are hidden. Within the family of post-hoc Explainable Artificial Intelligence (XAI) methods, the paper proposes explaining the decisions of black-box classifiers through confident itemsets (a small illustration follows below).

EXPERT SYSTEMS WITH APPLICATIONS (2021)
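A small, self-contained illustration of the itemset idea, assuming a discretized neighbourhood of the instance and a simple confidence threshold (both illustrative assumptions; the paper's mining procedure is more elaborate):

```python
# Keep itemsets (conjunctions of attribute=value pairs) whose confidence
# toward the black-box label of interest exceeds a threshold.
from itertools import combinations

def confident_itemsets(neighbourhood, labels, target, max_len=2, min_conf=0.9):
    """neighbourhood: list of dicts of discretized attribute values."""
    items = {(a, v) for row in neighbourhood for a, v in row.items()}
    results = []
    for size in range(1, max_len + 1):
        for itemset in combinations(sorted(items), size):
            covered = [l for row, l in zip(neighbourhood, labels)
                       if all(row.get(a) == v for a, v in itemset)]
            if covered:
                conf = covered.count(target) / len(covered)
                if conf >= min_conf:
                    results.append((itemset, conf))
    return sorted(results, key=lambda t: -t[1])

rows = [{"age": "young", "bp": "high"}, {"age": "young", "bp": "low"},
        {"age": "old", "bp": "high"}, {"age": "old", "bp": "high"}]
labels = ["sick", "healthy", "sick", "sick"]   # black-box labels on the rows
print(confident_itemsets(rows, labels, target="sick"))
```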

Article Computer Science, Artificial Intelligence

Generating Actionable Interpretations from Ensembles of Decision Trees

Gabriele Tolomei et al.

Summary: The paper introduces a technique that uses a feedback loop over decision-tree ensembles to recommend how predicted instances could be transformed. Experimental results show that the method can suggest changes to feature values that help interpret the rationale behind model predictions (a brute-force version of this idea is sketched below).

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING (2021)
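The sketch below captures the actionable-recommendation idea with a plain grid search standing in for the paper's use of the trees' internal thresholds; the grid resolution and the cost notion are illustrative choices:

```python
# Find the smallest single-feature change that flips an ensemble's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def suggest_tweak(model, x, feature_grid):
    """Return (feature index, new value, cost) that flips the prediction."""
    base = model.predict([x])[0]
    best = None
    for j, values in feature_grid.items():
        for v in sorted(values, key=lambda v: abs(v - x[j])):
            x2 = x.copy()
            x2[j] = v
            if model.predict([x2])[0] != base:
                if best is None or abs(v - x[j]) < best[2]:
                    best = (j, v, abs(v - x[j]))
                break   # values sorted by cost, so the first flip is cheapest
    return best

grid = {j: np.linspace(X[:, j].min(), X[:, j].max(), 25) for j in range(X.shape[1])}
print(suggest_tweak(model, X[0].copy(), grid))
```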

Article Computer Science, Artificial Intelligence

What is a Tabby? Interpretable Model Decisions by Learning Attribute-Based Classification Criteria

Haomiao Liu et al.

Summary: The proposed interpretable Hierarchical Criteria Network (HCN) aims to make classification models more understandable by learning explicit hierarchical criteria. The results show that HCN can learn meaningful attributes and reasonable interpretable classification criteria, providing further human feedback for model correction.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2021)

Article Computer Science, Artificial Intelligence

Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions

Miroslav Hudec et al.

Summary: The study introduces a classification method built from ordinal sums of conjunctive and disjunctive aggregation functions, which lets domain experts specify which observations matter for each attribute and exploits the variability of these functions for machine learning. It demonstrates the steps of human-in-the-loop interactive machine learning with aggregation functions (the standard ordinal-sum construction is recalled below).

KNOWLEDGE-BASED SYSTEMS (2021)
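For reference, the standard ordinal-sum construction for t-norms, the conjunctive half of the function families such classifiers combine (this is the usual textbook definition, not necessarily the paper's exact setup):

```latex
% Ordinal sum of t-norms T_k acting on disjoint subintervals [a_k, b_k]
% of [0,1]; outside every square [a_k, b_k]^2 the minimum is used.
\[
T(x,y) =
\begin{cases}
a_k + (b_k - a_k)\,
  T_k\!\left(\dfrac{x - a_k}{b_k - a_k},\, \dfrac{y - a_k}{b_k - a_k}\right)
  & \text{if } (x,y) \in [a_k, b_k]^2, \\[1ex]
\min(x,y) & \text{otherwise.}
\end{cases}
\]
```

The dual construction with t-conorms and the maximum yields the disjunctive case.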

Article Computer Science, Information Systems

Demystifying Thermal Comfort in Smart Buildings: An Interpretable Machine Learning Approach

Wei Zhang et al.

Summary: Thermal comfort is a crucial consideration in smart buildings, and an interpretable thermal comfort system helps to understand the complexity of comfort models and to optimize building systems. By studying the impact of features on comfort and creating interpretable model surrogates, the authors uncover model mechanisms and provide more intuitive information for smart building applications.

IEEE INTERNET OF THINGS JOURNAL (2021)

Article Computer Science, Artificial Intelligence

Using ontologies to enhance human understandability of global post-hoc explanations of black-box models

Roberto Confalonieri et al.

Summary: The use of ontologies can enhance the understandability of global post-hoc explanations, particularly when combined with domain knowledge. Results show that decision trees generated with ontologies are more understandable than standard models without significant compromise on accuracy.

ARTIFICIAL INTELLIGENCE (2021)

Article Biology

Interpretable heartbeat classification using local model-agnostic explanations on ECGs

Ines Neves et al.

Summary: This study presents an Explainable Artificial Intelligence (XAI) solution that improves the interpretability of heartbeat classification. By introducing a method that preserves the temporal dependency between time samples in time-series data, it makes the classification explanations more reliable and trustworthy (a segment-based perturbation sketch follows below).

COMPUTERS IN BIOLOGY AND MEDICINE (2021)
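A LIME-style sketch of the idea, in which the perturbation unit is a contiguous segment of the signal rather than an independent time sample; the segment count, masking value, and ridge surrogate are illustrative choices, not the paper's exact method:

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_signal(signal, predict_fn, n_segments=10, n_samples=200, seed=0):
    """Per-segment importances from a linear surrogate fitted on segment masks."""
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, len(signal), n_segments + 1).astype(int)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))   # 1 = keep segment
    baseline = signal.mean()                                   # masking value
    preds = np.empty(n_samples)
    for i, m in enumerate(masks):
        perturbed = signal.copy()
        for k in range(n_segments):
            if m[k] == 0:
                perturbed[bounds[k]:bounds[k + 1]] = baseline
        preds[i] = predict_fn(perturbed)
    return Ridge(alpha=1.0).fit(masks, preds).coef_

# Toy usage: the "model" scores the energy in the middle of the beat.
sig = np.sin(np.linspace(0, 6 * np.pi, 200))
score = lambda s: float(np.abs(s[80:120]).mean())
print(np.round(explain_signal(sig, score), 3))
```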

Article Computer Science, Artificial Intelligence

Designing Distributed Fuzzy Rule-Based Models

Ye Cui et al.

Summary: This study proposes the construction of distributed fuzzy rule-based models that aggregate the results of low-dimensional models through linear transformations, showing tangible benefits over monolithic rule-based models. Experimental results indicate that 1-D rule-based models combined through an optimal linkage matrix improve accuracy by an average of 43.46% and reduce computing costs by an average of 98.85% (the aggregation idea is sketched below).

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2021)
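The aggregation idea can be sketched as one simple model per input variable, fused by a linear transformation learned with least squares. The binned 1-D lookups below stand in for fuzzy rule bases, and the single linkage vector is a simplification of the paper's linkage matrix:

```python
import numpy as np

def fit_1d(xj, y, n_bins=8):
    """A binned mean-lookup model over one input variable."""
    edges = np.quantile(xj, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, xj, side="right") - 1, 0, n_bins - 1)
    means = np.array([y[idx == b].mean() if (idx == b).any() else y.mean()
                      for b in range(n_bins)])
    return lambda x: means[np.clip(np.searchsorted(edges, x, side="right") - 1,
                                   0, n_bins - 1)]

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)

models = [fit_1d(X[:, j], y) for j in range(X.shape[1])]
Z = np.column_stack([m(X[:, j]) for j, m in enumerate(models)])  # 1-D outputs
Zb = np.hstack([Z, np.ones((len(Z), 1))])                        # add bias
w = np.linalg.lstsq(Zb, y, rcond=None)[0]                        # linkage weights
y_hat = Zb @ w                                                   # fused prediction
```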

Article Computer Science, Artificial Intelligence

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Andreas Holzinger et al.

Summary: AI excels at certain narrow tasks, whereas humans excel at multi-modal thinking and at building self-explanatory systems. The medical domain highlights the importance of fusing several modalities into one result. Using conceptual knowledge to guide model training can lead to more explainable, robust, and less biased machine learning models.

INFORMATION FUSION (2021)

Article Computer Science, Artificial Intelligence

A randomization mechanism for realizing granular models in distributed system modeling

Dan Wang et al.

Summary: This study optimizes the performance of granular models through an active aggregation mechanism that adjusts local sources of knowledge toward consensus while reflecting and quantifying the diversity of that local knowledge.

KNOWLEDGE-BASED SYSTEMS (2021)

Review Computer Science, Artificial Intelligence

Explainable artificial intelligence: an analytical review

Plamen P. Angelov et al.

Summary: This paper provides a brief analytical review of the current state-of-the-art in explainability of artificial intelligence, discussing historical context, main challenges, recent methods, and future research directions.

WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY (2021)

Article Computer Science, Artificial Intelligence

A Survey on Neural Network Interpretability

Yu Zhang et al.

Summary: This study provides a comprehensive review of the interpretability of neural networks, clarifies its definition, and proposes a new taxonomy. Limited interpretability undermines trust in deep learning systems and raises ethical concerns, and it remains a desired property if deep networks are to become powerful tools in other research fields.

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2021)

Article Computer Science, Artificial Intelligence

Learning With Interpretable Structure From Gated RNN

Bo-Jian Hou et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2020)

Article Computer Science, Artificial Intelligence

Measuring the Quality of Explanations: The System Causability Scale (SCS) Comparing Human and Machine Explanations

Andreas Holzinger et al.

KUNSTLICHE INTELLIGENZ (2020)

Article Computer Science, Artificial Intelligence

Interpretable Deep Convolutional Fuzzy Classifier

Mojtaba Yeganejou et al.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2020)

Article Computer Science, Theory & Methods

Fuzzy rule-based models with randomized development mechanisms

Xingchen Hu et al.

FUZZY SETS AND SYSTEMS (2019)

Article Computer Science, Artificial Intelligence

Explanation in artificial intelligence: Insights from the social sciences

Tim Miller

ARTIFICIAL INTELLIGENCE (2019)

Article Computer Science, Artificial Intelligence

Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to?

Alberto Fernandez et al.

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE (2019)

Editorial Material Robotics

XAI-Explainable artificial intelligence

David Gunning et al.

SCIENCE ROBOTICS (2019)

Article Computer Science, Artificial Intelligence

Granular Models and Granular Outliers

Xiubin Zhu et al.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2018)

Article Computer Science, Artificial Intelligence

A Design of Granular Takagi-Sugeno Fuzzy Model Through the Synergy of Fuzzy Subspace Clustering and Optimal Allocation of Information Granularity

Xiubin Zhu et al.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2018)

Article Computer Science, Hardware & Architecture

The Mythos of Model Interpretability

Zachary C. Lipton

COMMUNICATIONS OF THE ACM (2018)

Article Computer Science, Artificial Intelligence

Granular Encoders and Decoders: A Study in Processing Information Granules

Xiubin Zhu et al.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2017)

Article Computer Science, Artificial Intelligence

Reinforced rule-based fuzzy models: Design and analysis

Eun-Hu Kim et al.

KNOWLEDGE-BASED SYSTEMS (2017)

Article Computer Science, Theory & Methods

Does machine learning need fuzzy logic?

Eyke Huellermeier

FUZZY SETS AND SYSTEMS (2015)