4.5 Article

Conclusive local interpretation rules for random forests

Related references

Note: Only some of the references are listed.
Article Computer Science, Artificial Intelligence

Evaluating XAI: A comparison of rule-based and example-based explanations

Jasper van der Waa et al.

Summary: The resurgence of Explainable AI has been driven by advances in Artificial Intelligence, yet valid evaluations of how different explanation styles affect user experience and behavior remain scarce. In the context of diabetes self-management, rule-based and example-based explanations affect system understanding and persuasion but do not improve task performance.

ARTIFICIAL INTELLIGENCE (2021)

Article Chemistry, Multidisciplinary

gbt-HIPS: Explaining the Classifications of Gradient Boosted Tree Ensembles

Julian Hatwell et al.

Summary: gbt-HIPS is a novel heuristic method for explaining gradient boosted tree (GBT) classification models by extracting a single classification rule from the ensemble of decision trees that makes up the GBT model. It offers the best trade-off between coverage and precision, while being demonstrably guarded against both under- and over-fitting. It also provides counterfactual detail, in line with widely accepted recommendations for what makes a good explanation.

APPLIED SCIENCES-BASEL (2021)

Article Computer Science, Artificial Intelligence

CHIRPS: Explaining random forest classification

Julian Hatwell et al.

ARTIFICIAL INTELLIGENCE REVIEW (2020)

Article Engineering, Industrial

Evaluation of patient safety culture using a random forest algorithm

Mecit Can Emre Simsekler et al.

RELIABILITY ENGINEERING & SYSTEM SAFETY (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Classifying Different Stages of Parkinson's Disease Through Random Forests

Carlo Ricciardi et al.

XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019 (2020)

Article Computer Science, Artificial Intelligence

From local explanations to global understanding with explainable AI for trees

Scott M. Lundberg et al.

NATURE MACHINE INTELLIGENCE (2020)

Article Computer Science, Software Engineering

iForest: Interpreting Random Forests via Visual Analytics

Xun Zhao et al.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (2019)

Article Computer Science, Artificial Intelligence

Interpreting tree ensembles with inTrees

Houtao Deng

INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS (2019)

Article Computer Science, Artificial Intelligence

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin

NATURE MACHINE INTELLIGENCE (2019)

Article Computer Science, Information Systems

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi et al.

IEEE ACCESS (2018)

Proceedings Paper Computer Science, Information Systems

Transparent Tree Ensembles

Alexander Moore et al.

ACM/SIGIR PROCEEDINGS 2018 (2018)

Article Computer Science, Artificial Intelligence

Explaining prediction models and individual predictions with feature contributions

Erik Strumbelj et al.

KNOWLEDGE AND INFORMATION SYSTEMS (2014)

Article Computer Science, Artificial Intelligence

Modeling wine preferences by data mining from physicochemical properties

Paulo Cortez et al.

DECISION SUPPORT SYSTEMS (2009)

Article Statistics & Probability

Predictive learning via rule ensembles

Jerome H. Friedman et al.

ANNALS OF APPLIED STATISTICS (2008)

Article Computer Science, Theory & Methods

A tutorial on spectral clustering

Ulrike von Luxburg

STATISTICS AND COMPUTING (2007)

Article Computer Science, Interdisciplinary Applications

Stochastic gradient boosting

Jerome H. Friedman

COMPUTATIONAL STATISTICS & DATA ANALYSIS (2002)

Article Computer Science, Artificial Intelligence

Random forests

Leo Breiman

MACHINE LEARNING (2001)