4.7 Article

Explainability in Deep Reinforcement Learning: A Review into Current Methods and Applications

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Theory & Methods

Explainable Deep Reinforcement Learning: State of the Art and Challenges

George A. Vouros

Summary: Interpretability, explainability, and transparency are crucial factors in the implementation of artificial intelligence methods in various critical domains. This article provides a review of state-of-the-art methods for explainable deep reinforcement learning, with a focus on meeting the needs of human operators.

ACM COMPUTING SURVEYS (2023)

Article Computer Science, Information Systems

Reinforcement Learning-Based Physical Cross-Layer Security and Privacy in 6G

Xiaozhen Lu et al.

Summary: Sixth-generation (6G) cellular systems are vulnerable to PHY-layer attacks and privacy leakage due to large-scale networks and time-sensitive applications. Optimized security schemes suffer performance degradation in 6G systems, and reinforcement learning (RL) algorithms can enhance security against smart attacks without relying on attack models. This article provides a comprehensive survey on RL-based 6G PHY cross-layer security and privacy protection.

IEEE COMMUNICATIONS SURVEYS AND TUTORIALS (2023)

Article Computer Science, Artificial Intelligence

Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning With Shapley Values

Alexandre Heuillet et al.

Summary: This study proposes a novel approach using Shapley values to explain cooperative strategies in multi-agent RL and tests its effectiveness in cooperation-centered environments. Experimental results demonstrate that Shapley values can successfully estimate the contribution of each agent.

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE (2022)
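The Shapley value used in the study above has a closed form over agent coalitions. As a minimal sketch (not the authors' implementation; the two-agent characteristic function below is invented for illustration), exact per-agent contributions can be computed by enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, value):
    """Exact Shapley value of each agent's contribution to the team
    reward, given a characteristic function value(coalition).
    Exponential in the number of agents, so only viable for small teams."""
    n = len(agents)
    phi = {a: 0.0 for a in agents}
    for a in agents:
        others = [b for b in agents if b != a]
        for k in range(n):
            for coalition in combinations(others, k):
                # Standard Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value(frozenset(coalition) | {a}) - value(frozenset(coalition))
                phi[a] += weight * marginal
    return phi

# Toy cooperative game: reward 1.0 if both agents act, 0.4 if only agent "A".
v = lambda S: 1.0 if S == {"A", "B"} else (0.4 if "A" in S else 0.0)
phi = shapley_values(["A", "B"], v)
# A is credited ~0.7 of the reward, B ~0.3; by efficiency they sum to 1.0
```

Real multi-agent settings require sampling or approximation, since the exact sum is exponential in the number of agents.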

Article Computer Science, Software Engineering

Event-driven temporal models for explanations - ETeMoX: explaining reinforcement learning

Juan Marcelo Parra-Ullauri et al.

Summary: The increasing autonomy of modern software systems, especially those built on Reinforcement Learning, raises concerns about the transparency of their decision-making criteria and calls for solutions that make AI systems explainable and trustworthy.

SOFTWARE AND SYSTEMS MODELING (2022)

Article Computer Science, Cybernetics

Explainable AI in Deep Reinforcement Learning Models for Power System Emergency Control

Ke Zhang et al.

Summary: This article discusses the interpretability issue in DRL models for power system emergency control and proposes a Deep-SHAP method, based on Shapley additive explanations (SHAP), to explain the decision-making process of the DRL model, ensuring trustworthy and transparent model decisions.

IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (2022)
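SHAP-style attributions like those behind Deep-SHAP can be approximated without the full framework. A Monte-Carlo Shapley sampling sketch follows; the scalar model `f`, its inputs, and the baseline are invented for illustration, and this is a crude stand-in for the paper's Deep-SHAP, not its method:

```python
import random

def sampled_shap(f, x, baseline, n_samples=500, seed=0):
    """Monte-Carlo Shapley attribution for a scalar model f: walk random
    feature orderings from the baseline to x, crediting each feature with
    its marginal change in f. Features not yet added keep baseline values,
    a rough stand-in for SHAP's background-data marginalisation."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_samples):
        order = rng.sample(range(d), d)  # random feature insertion order
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += (cur - prev) / n_samples
            prev = cur
    return phi

# Invented linear "critic": for linear models the attribution is exact.
f = lambda z: 2 * z[0] - z[1]
print(sampled_shap(f, x=[3.0, 1.0], baseline=[0.0, 0.0]))  # ~[6.0, -1.0]
```

For a linear model every ordering yields the same marginals, so the estimate matches the analytic Shapley values; for a DRL policy network the sampling genuinely approximates.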

Article Computer Science, Artificial Intelligence

MoET: Mixture of Expert Trees and its application to verifiable reinforcement learning

Marko Vasic et al.

Summary: Rapid advancements in deep learning have led to many breakthroughs, but deep models face limitations in safety-critical settings. The proposed model, MoET, combines decision-tree experts with a gating function to achieve better performance and verifiability.

NEURAL NETWORKS (2022)

Article Automation & Control Systems

Toward Interpretable-AI Policies Using Evolutionary Nonlinear Decision Trees for Discrete-Action Systems

Yashesh Dhebar et al.

Summary: This article proposes a nonlinear decision-tree approach to approximate and explain the control rules of a pretrained black-box deep reinforcement learning agent. The approach uses nonlinear optimization and a hierarchical structure to find simple and interpretable rules while maintaining comparable closed-loop performance.

IEEE TRANSACTIONS ON CYBERNETICS (2022)
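Policy-to-tree distillation of the kind described above can be illustrated with a toy depth-one "stump": fit the single (feature, threshold) rule that best imitates a black-box policy on sampled states. The teacher policy and states below are invented, and real methods grow far richer (here, nonlinear) trees:

```python
def fit_stump(states, actions):
    """Fit a depth-one decision stump imitating a black-box policy:
    exhaustively pick the (feature, threshold, labels) split that best
    matches the teacher's actions on the sampled states."""
    best = None  # (accuracy, feature, threshold, label_below, label_above)
    for j in range(len(states[0])):
        for t in sorted({s[j] for s in states}):
            for below, above in ((0, 1), (1, 0)):
                pred = [above if s[j] > t else below for s in states]
                acc = sum(p == a for p, a in zip(pred, actions)) / len(actions)
                if best is None or acc > best[0]:
                    best = (acc, j, t, below, above)
    return best

# Invented teacher: "brake" (action 1) when speed (feature 0) exceeds 0.5.
teacher = lambda s: 1 if s[0] > 0.5 else 0
states = [(0.1, 0.9), (0.4, 0.2), (0.6, 0.8), (0.9, 0.1)]
acc, feat, thr, below, above = fit_stump(states, [teacher(s) for s in states])
# Recovers an equivalent interpretable rule: IF s[feat] > thr THEN 1 ELSE 0
```

The exhaustive search is what buys interpretability and verifiability: the extracted rule can be inspected and checked, unlike the original network.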

Article Computer Science, Artificial Intelligence

Explaining Deep Learning Models Through Rule-Based Approximation and Visualization

Eduardo Soares et al.

Summary: This article introduces a novel approach to developing explainable machine learning models by approximating a deep reinforcement learning model with IF-THEN rules and enhancing interpretability through visualizing rules. Experimental results demonstrate the effective interpretability of specific DRL agents and the potential extension to a broader set of deep neural network models.

IEEE TRANSACTIONS ON FUZZY SYSTEMS (2021)
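Rule extraction of this flavour can be sketched with nearest-prototype IF-THEN rules: one prototype state per action, with the closest prototype deciding the action. The states and actions below are invented, and the paper's prototype construction is considerably more sophisticated than a per-action mean:

```python
from math import dist

def prototype_rules(states, actions):
    """One IF-THEN rule per action: 'IF the state is nearest to this
    prototype THEN take this action', with each prototype the mean of
    the states where the policy chose that action."""
    rules = {}
    for a in set(actions):
        group = [s for s, act in zip(states, actions) if act == a]
        rules[a] = tuple(sum(xs) / len(group) for xs in zip(*group))
    return rules

def apply_rules(rules, state):
    # Fire the rule whose prototype is closest to the query state.
    return min(rules, key=lambda a: dist(rules[a], state))

# Invented rollout data: action 0 near the origin, action 1 far from it.
rules = prototype_rules([(0, 0), (0, 1), (5, 5), (6, 4)], [0, 0, 1, 1])
print(apply_rules(rules, (1, 1)), apply_rules(rules, (5, 4)))  # 0 1
```

Each rule reads off directly ("IF the state resembles this prototype THEN act this way"), which is the visualizable form of interpretability the article targets.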

Proceedings Paper Automation & Control Systems

XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees

Aaron M. Roth et al.

Summary: This research introduces a sensor-based learning navigation algorithm using deep reinforcement learning, and improves robot navigation performance in dynamic environments by analyzing and refining the policy in decision-tree form.

2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) (2021)

Proceedings Paper Automation & Control Systems

Explainable AI methods on a deep reinforcement learning agent for automatic docking

Jakob Løver et al.

Summary: This paper implements three XAI methods to explain the decisions made by a deep reinforcement learning agent in automatic docking on a fully-actuated vessel, addressing challenges related to interpretability and trustworthiness of ANNs. The authors discuss the properties and suitability of the three methods, juxtaposing them with important attributes of the docking agent.

IFAC PAPERSONLINE (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Explainable Reinforcement Learning for Longitudinal Control

Roman Liessner et al.

Summary: This paper demonstrates the application of SHAP values to interpret the decisions made by a DRL agent in an autonomous driving scenario, showing that the RL-SHAP representation is effective in providing insights into the learned action-selection policy. By combining learned actions and SHAP values, it is possible to observe the effects of different features on the agent's decisions at each time step, and identify which influences are significant.

ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2 (2021)

Article Engineering, Aerospace

Coactive design of explainable agent-based task planning and deep reinforcement learning for human-UAVs teamwork

Chang Wang et al.

CHINESE JOURNAL OF AERONAUTICS (2020)

Article Robotics

Deep Reinforcement Learning for Safe Local Planning of a Ground Vehicle in Unknown Rough Terrain

Shirel Josef et al.

IEEE ROBOTICS AND AUTOMATION LETTERS (2020)

Proceedings Paper Engineering, Electrical & Electronic

Explainability of Intelligent Transportation Systems using Knowledge Compilation: a Traffic Light Controller Case

Salomon Wollenstein-Betech et al.

2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC) (2020)