Article

Toward Interpretable-AI Policies Using Evolutionary Nonlinear Decision Trees for Discrete-Action Systems

Journal

IEEE TRANSACTIONS ON CYBERNETICS

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2022.3180664

Keywords

Artificial intelligence; Task analysis; Optimization; Automobiles; Training; Reinforcement learning; Boolean functions; Bilevel; interpretable; nonlinear decision tree (NLDT); reinforcement learning (RL)

Funding

  1. Ford-MSU Alliance Project


This article proposes a nonlinear decision-tree approach to approximate and explain the control rules of a pretrained black-box deep reinforcement learning agent. The approach uses nonlinear optimization and a hierarchical structure to find simple and interpretable rules while maintaining comparable closed-loop performance.
Black-box artificial intelligence (AI) induction methods such as deep reinforcement learning (DRL) are increasingly being used to find optimal policies for a given control task. Although policies represented by a black-box AI can efficiently execute the underlying control task and achieve optimal closed-loop performance (controlling the agent from the initial time step until the successful termination of an episode), the resulting control rules are often complex and neither interpretable nor explainable. In this article, we use a recently proposed nonlinear decision-tree (NLDT) approach to find a hierarchical set of control rules that maximizes the open-loop performance in approximating and explaining a pretrained black-box DRL (oracle) agent from a labeled state-action dataset. Recent advances in nonlinear optimization using evolutionary computation make it possible to find a hierarchical set of nonlinear control rules, expressed as functions of the state variables, through a computationally fast bilevel optimization procedure at each node of the proposed NLDT. In addition, we propose a reoptimization procedure for enhancing the closed-loop performance of an already derived NLDT. We evaluate the proposed methodologies (open- and closed-loop NLDTs) on several control problems with multiple discrete actions. In all of these problems, the proposed approach finds relatively simple and interpretable rules involving one to four nonlinear terms per rule, while achieving closed-loop performance on par with the trained black-box DRL agent. A postprocessing approach for simplifying the NLDT is also suggested. The results are encouraging, as they suggest that complicated black-box DRL policies involving thousands of parameters (which makes them noninterpretable) can be replaced with relatively simple, interpretable policies, and they motivate further applications of the proposed approach to more complex control tasks.
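To make the flavor of such an interpretable policy concrete, the following is a minimal Python sketch (not the authors' implementation) of how an NLDT-style policy could map a state vector to a discrete action. Each internal node holds a short rule built from one to a few nonlinear (power-law) terms of the state variables, and the sign of the rule selects the branch. The class name, the CartPole-like state layout, and the coefficients are all hypothetical; the bilevel evolutionary optimization that actually derives the rules is not shown.

```python
# Illustrative sketch only: hypothetical NLDT node structure and inference.
import numpy as np


class NLDTNode:
    def __init__(self, exponents=None, weights=None, bias=0.0,
                 left=None, right=None, action=None):
        # Leaf node: stores a discrete action.
        # Internal node: rule f(x) = sum_i w_i * prod_j x_j**B_ij + bias
        self.exponents = exponents   # B: (n_terms, n_state_vars) power matrix
        self.weights = weights       # w: (n_terms,) term coefficients
        self.bias = bias
        self.left, self.right = left, right
        self.action = action

    def rule(self, x):
        # Evaluate the nonlinear split rule at state x.
        terms = np.prod(np.power(x, self.exponents), axis=1)
        return float(np.dot(self.weights, terms) + self.bias)

    def act(self, x):
        if self.action is not None:              # leaf: return the action
            return self.action
        child = self.left if self.rule(x) <= 0.0 else self.right
        return child.act(x)


# Example: a depth-1 tree for a CartPole-like state [pos, vel, angle, ang_vel]
# with a single two-term rule (coefficients made up for illustration).
leaf_push_left = NLDTNode(action=0)
leaf_push_right = NLDTNode(action=1)
root = NLDTNode(
    exponents=np.array([[0, 0, 1, 0],      # term 1: angle
                        [0, 0, 0, 1]]),    # term 2: angular velocity
    weights=np.array([1.0, 0.5]),
    bias=0.0,
    left=leaf_push_left,
    right=leaf_push_right,
)

state = np.array([0.02, -0.1, 0.03, 0.15])
print(root.act(state))  # -> 1 (push right) for this state
```

In the open-loop setting described in the abstract, such rules would be fit so that the tree's actions match the oracle DRL agent's labels on a state-action dataset; the closed-loop reoptimization step would instead tune the rule coefficients against episode returns obtained by rolling the tree out in the environment.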
