Journal
NEURAL NETWORKS
Volume 161, Issue -, Pages 228-241
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.01.025
Keywords
Efficient interpretability; Interpretable reinforcement learning; Saliency map
Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems. Saliency maps are frequently used to provide interpretability for deep neural networks. However, in the RL domain, existing saliency map approaches are either computationally expensive and thus cannot satisfy the real-time requirement of real-world scenarios or cannot produce interpretable saliency maps for RL policies. In this work, we propose an approach of Distillation with selective Input Gradient Regularization (DIGR) which uses policy distillation and input gradient regularization to produce new policies that achieve both high interpretability and computation efficiency in generating saliency maps. Our approach is also found to improve the robustness of RL policies to multiple adversarial attacks. We conduct experiments on three tasks, MiniGrid (Fetch Object), Atari (Breakout), and CARLA Autonomous Driving, to demonstrate the importance and effectiveness of our approach. (c) 2023 Published by Elsevier Ltd.
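The abstract's core idea, distilling a teacher policy into a student whose input gradients are suppressed outside salient regions, can be sketched numerically. Below is a minimal illustration assuming a linear-softmax student policy; the function name `digr_loss`, the binary mask, and the weight `lam` are illustrative assumptions for this sketch, not the authors' implementation (the paper works with deep networks and full policy distillation):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over action logits
    e = np.exp(z - z.max())
    return e / e.sum()

def digr_loss(W, x, teacher_probs, mask, lam=1.0):
    """Sketch of a DIGR-style objective for a linear student policy.

    distill: cross-entropy between student and teacher action distributions.
    penalty: squared input gradient, restricted by `mask` to the features
             that should NOT influence the policy (mask = 1 there).
    """
    p = softmax(W @ x)
    distill = -np.sum(teacher_probs * np.log(p + 1e-12))
    # For a linear-softmax student, the gradient of the cross-entropy
    # w.r.t. the input x has the closed form W^T (p - teacher_probs).
    input_grad = W.T @ (p - teacher_probs)
    penalty = np.sum((mask * input_grad) ** 2)
    return distill + lam * penalty

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))            # 3 actions, 8 input features
x = rng.normal(size=8)
teacher = np.array([0.7, 0.2, 0.1])    # teacher action distribution
mask = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # regularize only the last 4 features
loss = digr_loss(W, x, teacher, mask)
print(loss)
```

Minimizing such a loss over `W` would drive the student both to imitate the teacher and to keep its input gradients (and hence gradient-based saliency maps) concentrated on the unmasked, task-relevant features.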