Proceedings Paper

Interpretable AI Agent Through Nonlinear Decision Trees for Lane Change Problem

Publisher

IEEE
DOI: 10.1109/SSCI50451.2021.9659552

Keywords

Decision trees; bi-level optimization; machine learning; reinforcement learning; autonomous vehicles

Funding

  1. Ford-MSU Alliance project

Summary

This paper explores the interpretability of DNN/RL systems using the NLDT framework, which distills the state-action logic into simple rules that explain the system's decisions. After demonstrating the methodology on a mountain car control problem, the study derives analytical decision rules for a lane change problem involving six critical cars and simplifies them further for an English-like interpretation.
Recent years have witnessed a surge in the application of deep neural networks (DNNs) and reinforcement learning (RL) methods to various autonomous control systems and game-playing problems. While these methods are capable of learning from real-world data and producing adequate actions for various state conditions, their internal complexity does not allow an easy way to explain their actions. In this paper, we generate state-action pair data from a trained DNN/RL system and employ a previously proposed nonlinear decision tree (NLDT) framework to decipher hidden, simple rule sets that interpret the working of DNN/RL systems. The complexity of the rule sets is controllable by the user. In essence, the inherent bi-level optimization procedure that finds the NLDTs is capable of reducing the complexity of the state-action logic to a minimal and interpretable level. After demonstrating the working principle of the NLDT method on a revised mountain car control problem, this paper applies the methodology to the lane changing problem involving six critical cars in front of and behind a pilot car in the left, middle, and right lanes. NLDTs are derived with simple relationships among 12 decision variables involving the relative distances and velocities of the six critical cars. The derived analytical decision rules are then simplified further using a symbolic analysis tool to provide an English-like interpretation of the lane change problem. This study scratches the surface of the interpretability of modern machine-learning-based tools; the issue now deserves further attention and wider application to make the overall approach more integrated and effective.
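The distillation pipeline the abstract describes can be sketched in a few lines: log state-action pairs from the black-box policy, then fit an interpretable tree whose depth acts as the user-controlled complexity limit. The sketch below is purely illustrative, with synthetic data and a placeholder policy; the paper's NLDT uses nonlinear split rules found by bi-level optimization, for which scikit-learn's axis-aligned `DecisionTreeClassifier` is only a stand-in.

```python
# Hypothetical sketch of policy distillation into an interpretable tree.
# The synthetic data, placeholder policy, and axis-aligned tree are all
# assumptions standing in for the paper's trained DNN/RL agent and NLDT.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# 12 state variables: relative distance and velocity of the six critical
# cars (front/rear in the left, middle, and right lanes).
X = rng.uniform(-1.0, 1.0, size=(1000, 12))

# Placeholder "black-box" policy standing in for the trained DNN/RL
# system: change lane (1) when the first gap variable is large enough.
def black_box_policy(state):
    return 1 if state[0] > 0.5 else 0

# State-action pair data generated by querying the black-box policy.
y = np.array([black_box_policy(s) for s in X])

# The depth cap plays the role of the user-controlled complexity limit.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Print the learned rules in a readable, English-like form.
print(export_text(tree, feature_names=[f"x{i}" for i in range(12)]))
```

Because the placeholder policy is a single threshold, the distilled tree recovers it almost exactly; for a real DNN/RL agent, the fidelity of the surrogate to the black box would need to be checked on held-out states.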

