Proceedings Paper

Why? Why not? When? Visual Explanations of Agent Behaviour in Reinforcement Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/PacificVis53943.2022.00020

Keywords

Human-centered computing; Visualization; Visualization techniques; Treemaps; Visualization design and evaluation methods

Funding

  1. U.S. National Science Foundation [OAC-1934766]

This paper introduces a visual analytics interface called PolicyExplainer, which allows users to directly query the reasoning behind the actions of a reinforcement learning agent. By visualizing the agent's states, policy, and rewards, PolicyExplainer provides explanations for the agent's decisions, promoting trust and understanding.
Reinforcement learning (RL) is used in many domains, including autonomous driving, robotics, stock trading, and video games. Unfortunately, the black-box nature of RL agents, combined with legal and ethical considerations, makes it increasingly important that humans (including those who are not experts in RL) understand the reasoning behind the actions taken by an RL agent, particularly in safety-critical domains. To help address this challenge, we introduce PolicyExplainer, a visual analytics interface that lets the user directly query an autonomous agent. PolicyExplainer visualizes the states, policy, and expected future rewards for an agent, and supports asking and answering questions such as: Why take this action? Why not take this other action? When is this action taken? PolicyExplainer is designed based upon a domain analysis with RL researchers, and is evaluated via qualitative and quantitative assessments on a trio of domains: taxi navigation, a stack bot domain, and drug recommendation for HIV patients. We find that PolicyExplainer's visual approach promotes trust and understanding of agent decisions better than a state-of-the-art text-based explanation approach. Interviews with domain practitioners provide further validation for PolicyExplainer as applied to safety-critical domains. Our results help demonstrate how visualization-based approaches can be leveraged to decode the behavior of autonomous RL agents, particularly for RL non-experts.
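The three query types in the abstract can be grounded in the agent's expected future rewards. As a minimal sketch (not the authors' implementation, and assuming a tabular Q-function where Q[s, a] is the expected discounted return), each answer reduces to comparing Q-values: "why" points to the highest-valued action, "why not" quantifies the value gap to an alternative, and "when" lists the states where the policy selects a given action. All names below (q_table, action_names, explain_*) are illustrative assumptions.

```python
# Minimal sketch, not PolicyExplainer itself: deriving "Why?", "Why not?",
# and "When?" answers from a tabular Q-function. The Q-table and names
# (q_table, action_names, explain_*) are illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 3
rng = np.random.default_rng(seed=0)
# Hypothetical learned values: Q[s, a] = expected future (discounted) reward.
q_table = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
action_names = ["left", "right", "pickup"]

def explain_why(state: int) -> str:
    # Why take this action? It has the highest expected future reward.
    best = int(np.argmax(q_table[state]))
    return (f"In state {state}, '{action_names[best]}' is chosen: its expected "
            f"future reward ({q_table[state, best]:.2f}) is the highest available.")

def explain_why_not(state: int, other: int) -> str:
    # Why not this other action? Report the expected-reward gap to the best one.
    best = int(np.argmax(q_table[state]))
    gap = q_table[state, best] - q_table[state, other]
    return (f"'{action_names[other]}' is not chosen in state {state}: it is "
            f"expected to yield {gap:.2f} less reward than '{action_names[best]}'.")

def explain_when(action: int) -> str:
    # When is this action taken? In every state where the greedy policy picks it.
    states = [s for s in range(n_states)
              if int(np.argmax(q_table[s])) == action]
    return f"'{action_names[action]}' is taken in states {states}."

print(explain_why(0))
print(explain_why_not(0, other=1))
print(explain_when(2))
```

PolicyExplainer presents these comparisons visually rather than as text; the sketch only illustrates the reward-based reasoning that the interface exposes.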
