Article

Post-hoc explanation of black-box classifiers using confident itemsets

Journal

Expert Systems with Applications
Volume 165

Publisher

Pergamon-Elsevier Science Ltd
DOI: 10.1016/j.eswa.2020.113941

Keywords

Explainable artificial intelligence; Machine learning; Post-hoc explanation; Confident itemsets; Interpretability; Fidelity


Abstract
Black-box Artificial Intelligence (AI) methods, e.g., deep neural networks, have been widely utilized to build predictive models that can extract complex relationships from a dataset and make predictions for new, unseen data records. However, it is difficult to trust the decisions such methods make, since their inner workings and decision logic are hidden from the user. Explainable Artificial Intelligence (XAI) refers to systems that try to explain how a black-box AI model produces its outcomes. Post-hoc XAI methods approximate the behavior of a black-box by extracting relationships between feature values and predictions. Perturbation-based and decision-set methods are among the most commonly used post-hoc XAI systems. The former explanators rely on random perturbations of data records to build local or global linear models that explain individual predictions or the whole model. The latter explanators use the feature values that appear most frequently to construct a set of decision rules that produces the same outcomes as the target black-box. However, both classes of XAI methods have limitations. Random perturbations do not take into account the distribution of feature values in different subspaces, which can lead to misleading approximations. Decision sets attend only to frequent feature values and miss many important correlations between features and class labels that appear less frequently but accurately represent the decision boundaries of the model.

In this paper, we address these challenges by proposing an explanation method named Confident Itemsets Explanation (CIE). We introduce confident itemsets: sets of feature values that are highly correlated with a specific class label. CIE uses confident itemsets to discretize the whole decision space of a model into smaller subspaces. By extracting important correlations between the features and the outcomes of the classifier in different subspaces, CIE produces instance-wise and class-wise explanations that accurately approximate the behavior of the target black-box.

In a set of experiments on various black-box classifiers and on different tabular and textual classification tasks, we show that CIE outperforms previous perturbation-based and rule-based explanators in terms of both the descriptive accuracy (an improvement of 9.3%) and the interpretability (an improvement of 8.8%) of its explanations. Subjective evaluations demonstrate that users find the explanations of CIE more understandable and interpretable than those of the comparison methods.
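To make the abstract's central construct concrete, the sketch below mines "confident itemsets" in the spirit described: sets of (feature, value) pairs whose co-occurrence with a predicted class label exceeds a confidence threshold. This is a minimal illustration under stated assumptions, not the paper's algorithm: it assumes discretized tabular features and the black-box's predicted labels as input, and the function name, parameter defaults, and brute-force enumeration are our own simplifications.

```python
from collections import defaultdict
from itertools import combinations

def mine_confident_itemsets(records, labels, min_support=0.01,
                            min_confidence=0.9, max_len=3):
    """Return (itemset, class_label, confidence) triples.

    records: list of dicts mapping feature name -> discretized value
    labels:  the black-box classifier's predicted label for each record
    All names and thresholds here are illustrative, not the paper's.
    """
    n = len(records)
    support = defaultdict(int)                     # itemset -> #records containing it
    joint = defaultdict(lambda: defaultdict(int))  # itemset -> label -> co-occurrence count

    # Count every itemset of up to max_len (feature, value) pairs per record,
    # together with the label the black-box assigned to that record.
    for rec, y in zip(records, labels):
        items = tuple(sorted(rec.items()))
        for k in range(1, max_len + 1):
            for subset in combinations(items, k):
                support[subset] += 1
                joint[subset][y] += 1

    # Keep itemsets that are frequent enough and highly predictive of a label.
    confident = []
    for itemset, count in support.items():
        if count / n < min_support:
            continue
        for y, co_count in joint[itemset].items():
            confidence = co_count / count          # P(label = y | itemset)
            if confidence >= min_confidence:
                confident.append((itemset, y, confidence))
    return confident

# Tiny usage example with hypothetical data; labels come from the black-box,
# not from ground truth.
records = [{"age": "30-40", "income": "high"},
           {"age": "30-40", "income": "low"}]
labels = ["approve", "reject"]
print(mine_confident_itemsets(records, labels, min_support=0.0))
```

A practical implementation would replace the exhaustive enumeration with an Apriori-style frequent-itemset miner, since the number of candidate itemsets grows combinatorially with the number of features.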

Authors

Milad Moradi; Matthias Samwald
