Journal
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
Publisher
IEEE
DOI: 10.1109/IJCNN55064.2022.9892336
Keywords
generative model; neural network; deep learning; natural language processing; XAI
This study proposes a generative XAI framework, INTERACTION, which presents explanations in two steps and achieves competitive results in experiments.
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus on delivering a single explanation, which fails to account for the diversity of human thought and experience in language. This paper addresses that gap by proposing a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). The framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct extensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI [1]. Our method achieves competitive or better performance than state-of-the-art baselines on explanation generation (up to a 4.7% gain in BLEU) and prediction (up to a 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
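The diversity in step two comes from the conditional VAE: conditioned on the same input, different latent samples decode to different explanations. The toy sketch below illustrates only that sampling idea with the standard reparameterization trick (z = mu + sigma * eps); the model, decoder, and all names here are hypothetical stand-ins, not the authors' implementation, which uses a contextual conditional VAE over Transformer representations.

```python
import math
import random

random.seed(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: sample z = mu + sigma * eps, eps ~ N(0, 1)."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def decode(z):
    """Stand-in decoder: maps a latent code to one of a few canned explanations.

    A real decoder would generate free-form text conditioned on z and the
    (premise, hypothesis) pair; canned strings keep the sketch self-contained.
    """
    templates = [
        "the premise entails the hypothesis",
        "the hypothesis restates the premise",
        "both sentences describe the same event",
    ]
    return templates[int(abs(z) * 10) % len(templates)]

# In the real model, the conditioning context (premise, hypothesis) would
# determine mu and log_var; here they are fixed toy values.
mu, log_var = 0.0, 0.5

# Drawing several latent samples for one input yields diverse explanations.
explanations = {decode(reparameterize(mu, log_var)) for _ in range(20)}
for e in sorted(explanations):
    print(e)
```

Because each draw of eps perturbs z, repeated decoding of the same input produces a set of distinct explanations rather than a single fixed one, which is the behavior step two targets.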