Article

Generating Actionable Interpretations from Ensembles of Decision Trees

Journal

IEEE Transactions on Knowledge and Data Engineering
Volume 33, Issue 4, Pages 1540-1553

Publisher

IEEE Computer Society
DOI: 10.1109/TKDE.2019.2945326

Keywords

Machine learning interpretability; Actionable feature tweaking; Recommending feature changes; Altering model predictions; Ensemble of decision trees

Funding

  1. MIUR under grant Dipartimenti di eccellenza 2018-2022 of the Department of Computer Science of Sapienza University


Summary

The paper introduces a technique that uses the feedback loop from decision tree ensembles to offer recommendations for transforming predicted instances. Experimental results show that the method can suggest changes to feature values that help interpret the rationale behind model predictions.

Abstract

Machine-learned models are often perceived as black boxes: they are given inputs and hopefully produce desired outputs. There are many circumstances, however, where human interpretability is crucial to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and (iii) how to alter the prediction when the modified instance is fed back to the model. In this paper, we present a technique that exploits the feedback loop originating from the internals of any ensemble of decision trees to offer recommendations for transforming a k-labelled predicted instance into a k'-labelled one (for any possible pair of class labels k, k'). Our proposed algorithm perturbs individual feature values of an instance so as to change the prediction the ensemble outputs on the transformed instance, subject to two constraints: the cost and the tolerance of the transformation. Finally, we evaluate our approach on four distinct application domains: online advertising, healthcare, spam filtering, and handwritten digit recognition. Experiments confirm that our solution is able to suggest changes to feature values that help interpret the rationale behind model predictions, making it useful in practice, especially if implemented efficiently.
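The abstract describes the transformation procedure only at a high level. The following Python sketch is a rough illustration of that idea, under the assumption that the ensemble is a scikit-learn random forest (the paper targets any ensemble of decision trees): for each tree it enumerates the root-to-leaf paths ending in the desired class, builds a candidate instance that satisfies each path by nudging features past the split thresholds by a fixed offset epsilon, and keeps the lowest-cost candidate that actually flips the whole ensemble's prediction. The helper names (paths_to_class, tweak_instance), the fixed epsilon, and the L2 cost are illustrative choices, not the paper's exact algorithm or cost/tolerance definitions.

    # Minimal feature-tweaking sketch on a scikit-learn random forest (assumed setup).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def paths_to_class(tree, target_class):
        """Yield root-to-leaf paths, as lists of (feature, threshold, go_left),
        whose leaf predicts `target_class`."""
        t = tree.tree_
        def walk(node, conditions):
            if t.children_left[node] == -1:  # leaf node
                if np.argmax(t.value[node][0]) == target_class:
                    yield conditions
                return
            f, thr = t.feature[node], t.threshold[node]
            yield from walk(t.children_left[node], conditions + [(f, thr, True)])
            yield from walk(t.children_right[node], conditions + [(f, thr, False)])
        yield from walk(0, [])

    def tweak_instance(forest, x, target_class, epsilon=0.1):
        """Return the lowest-cost (L2) perturbation of x that the whole forest
        labels as `target_class`, or None if no candidate path flips it."""
        best, best_cost = None, np.inf
        for tree in forest.estimators_:
            for path in paths_to_class(tree, target_class):
                x_new = x.copy()
                for feature, threshold, go_left in path:
                    if go_left and x_new[feature] > threshold:        # path needs x <= thr
                        x_new[feature] = threshold - epsilon
                    elif not go_left and x_new[feature] <= threshold:  # path needs x > thr
                        x_new[feature] = threshold + epsilon
                # keep the candidate only if the whole ensemble changes its prediction
                if forest.predict(x_new.reshape(1, -1))[0] == target_class:
                    cost = np.linalg.norm(x_new - x)  # transformation cost (illustrative)
                    if cost < best_cost:
                        best, best_cost = x_new, cost
        return best

    if __name__ == "__main__":
        X, y = make_classification(n_samples=500, n_features=8, random_state=0)
        forest = RandomForestClassifier(n_estimators=20, max_depth=4,
                                        random_state=0).fit(X, y)
        x = X[0]
        k = forest.predict(x.reshape(1, -1))[0]
        x_tweaked = tweak_instance(forest, x, target_class=1 - k)
        if x_tweaked is not None:
            print("tweaked features:", np.flatnonzero(x_tweaked != x))

Checking each candidate against the full ensemble matters because satisfying one tree's path does not guarantee that the averaged vote of all trees changes; the cost comparison then picks the least invasive transformation among those that do flip the prediction.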

