Article

Model tree methods for explaining deep reinforcement learning agents in real-time robotic applications

Journal

NEUROCOMPUTING
Volume 515, Issue -, Pages 133-144

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.10.014

Keywords

Explainable artificial intelligence; Model trees; Reinforcement learning; Robotics

This paper provides an overview and analysis of methods for building model trees to explain deep reinforcement learning agents solving robotics tasks. The study finds that multiple outputs are important for capturing the dependencies among output features, and that introducing domain knowledge via a hierarchy among the input features improves accuracy and speeds up the building process.
Deep reinforcement learning has proven effective in the field of robotics, but the black-box nature of deep neural networks impedes the applicability of deep reinforcement learning agents to real-world tasks. The field of explainable artificial intelligence addresses this by developing explanation methods that aim to make such agents understandable to humans. Model trees used as surrogate models have proven useful for producing explanations for black-box models in real-world robotic applications, in particular because they are capable of providing explanations in real time. In this paper, we provide an overview and analysis of available methods for building model trees that explain deep reinforcement learning agents solving robotics tasks. We find that multiple outputs are important for the model to grasp the dependencies of coupled output features, i.e. actions. Additionally, our results indicate that introducing domain knowledge via a hierarchy among the input features during the building process yields higher accuracies and a faster building process. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
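
The multi-output surrogate idea described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical illustration, not the paper's implementation: the `states` and `actions` arrays are assumed to come from rollouts of the trained agent, and a simple model tree is built by partitioning the state space with a shallow decision tree and fitting one multi-output linear model per leaf, so that coupled action dimensions are predicted jointly rather than by separate single-output trees.

```python
# Hypothetical illustration only -- not the authors' implementation.
# A multi-output model tree surrogate: a shallow decision tree
# partitions the state space, and each leaf holds one linear model
# mapping states to all action dimensions at once.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

def fit_model_tree(states, actions, max_leaves=16):
    """states: (n, d) array; actions: (n, k) array of agent outputs."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves)
    tree.fit(states, actions)          # tree supports multi-output targets
    leaves = tree.apply(states)        # leaf index of every training sample
    models = {leaf: LinearRegression().fit(states[leaves == leaf],
                                           actions[leaves == leaf])
              for leaf in np.unique(leaves)}
    return tree, models

def predict(tree, models, states):
    leaves = tree.apply(states)
    n_out = next(iter(models.values())).coef_.shape[0]
    out = np.empty((len(states), n_out))
    for leaf, model in models.items():
        mask = leaves == leaf
        if mask.any():
            out[mask] = model.predict(states[mask])
    return out
```

Because each leaf is a linear model over the input features, a surrogate prediction can be explained in real time by reading off the active leaf's coefficients, which matches the real-time explanation capability the abstract attributes to model trees.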
