Related references
Note: Only a subset of the references is listed.
Article
Computer Science, Theory & Methods
George A. Vouros
Summary: Interpretability, explainability, and transparency are crucial factors in the implementation of artificial intelligence methods in various critical domains. This article provides a review of state-of-the-art methods for explainable deep reinforcement learning, with a focus on meeting the needs of human operators.
ACM COMPUTING SURVEYS
(2023)
Article
Computer Science, Information Systems
Xiaozhen Lu et al.
Summary: Sixth-generation (6G) cellular systems are vulnerable to PHY-layer attacks and privacy leakage due to large-scale networks and time-sensitive applications. Optimized security schemes suffer performance degradation in 6G systems, and reinforcement learning (RL) algorithms can enhance security against smart attacks without relying on attack models. This article provides a comprehensive survey on RL-based 6G PHY cross-layer security and privacy protection.
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS
(2023)
Article
Computer Science, Artificial Intelligence
Alexandre Heuillet et al.
Summary: This study proposes a novel approach using Shapley values to explain cooperative strategies in multi-agent RL and tests its effectiveness in cooperation-centered environments. Experimental results demonstrate that Shapley values can successfully estimate the contribution of each agent.
IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE
(2022)
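The Heuillet et al. entry above uses Shapley values to estimate each agent's contribution to a cooperative team. As an illustrative sketch (not the paper's implementation), the exact Shapley value can be computed by averaging an agent's marginal contribution over all coalitions; the toy reward table below is hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, coalition_value):
    """Exact Shapley values: weighted average marginal contribution of
    each agent over all coalitions (tractable only for few agents)."""
    n = len(agents)
    values = {}
    for agent in agents:
        others = [a for a in agents if a != agent]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                # Weight of a coalition of size r in the Shapley formula.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (coalition_value(set(subset) | {agent})
                                   - coalition_value(set(subset)))
        values[agent] = total
    return values

# Hypothetical cooperative task: team reward for each agent coalition.
reward = {frozenset(): 0, frozenset({"a"}): 1, frozenset({"b"}): 2,
          frozenset({"a", "b"}): 5}
print(shapley_values(["a", "b"], lambda s: reward[frozenset(s)]))
# → {'a': 2.0, 'b': 3.0}
```

Note that the values sum to the grand-coalition reward (2.0 + 3.0 = 5), the efficiency property that makes Shapley values attractive for crediting agents.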
Article
Computer Science, Software Engineering
Juan Marcelo Parra-Ullauri et al.
Summary: The increasing autonomy in modern software systems, especially in the context of Reinforcement Learning, raises concerns about the transparency of decision-making criteria, requiring solutions for explainability and trustworthiness in AI systems.
SOFTWARE AND SYSTEMS MODELING
(2022)
Article
Computer Science, Cybernetics
Ke Zhang et al.
Summary: This article discusses the interpretability of DRL models for power system emergency control and proposes Deep-SHAP, a method based on SHAP (Shapley additive explanations), to explain the DRL model's decision-making process, ensuring trustworthy and transparent model decisions.
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
(2022)
Article
Computer Science, Artificial Intelligence
Marko Vasic et al.
Summary: Rapid advancements in deep learning have led to many breakthroughs, but these models face limitations in safety-critical settings. The proposed model, MoET, combines expert models and gating functions to achieve better performance and verifiability.
Article
Automation & Control Systems
Yashesh Dhebar et al.
Summary: This article proposes a nonlinear decision-tree approach to approximate and explain the control rules of a pretrained black-box deep reinforcement learning agent. The approach uses nonlinear optimization and a hierarchical structure to find simple and interpretable rules while maintaining comparable closed-loop performance.
IEEE TRANSACTIONS ON CYBERNETICS
(2022)
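The Dhebar et al. entry above (like the Roth et al. entry further down) distills a black-box DRL policy into an interpretable tree. A minimal sketch of the idea, assuming a hypothetical two-feature policy and reducing the tree to a single depth-1 stump: sample states, query the black-box policy for actions, and pick the feature/threshold split that best imitates it.

```python
import random

def distill_stump(policy, sample_state, n=500, seed=1):
    """Distill a black-box policy into a depth-1 decision stump by
    searching all observed feature thresholds for the best imitation."""
    rng = random.Random(seed)
    data = [(s := sample_state(rng), policy(s)) for _ in range(n)]
    best = None
    dim = len(data[0][0])
    for f in range(dim):
        for thresh in sorted({s[f] for s, _ in data}):
            for lo, hi in ((0, 1), (1, 0)):
                # Fraction of sampled states where the stump's action
                # matches the black-box policy's action.
                acc = sum((hi if s[f] > thresh else lo) == a
                          for s, a in data) / n
                if best is None or acc > best[0]:
                    best = (acc, f, thresh, lo, hi)
    return best  # (accuracy, feature, threshold, action_if_low, action_if_high)

# Hypothetical black-box policy: brake (1) whenever speed (feature 1) > 0.6.
policy = lambda s: 1 if s[1] > 0.6 else 0
sample = lambda rng: [rng.random(), rng.random()]
print(distill_stump(policy, sample))
```

Real distillation methods grow deeper (here, nonlinear) trees and iterate between data collection and fitting, but the fidelity-vs-interpretability trade-off is the same: the stump is readable precisely because it is a drastic simplification of the policy network.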
Article
Computer Science, Artificial Intelligence
Eduardo Soares et al.
Summary: This article introduces a novel approach to explainable machine learning that approximates a deep reinforcement learning model with IF-THEN rules and enhances interpretability by visualizing those rules. Experimental results demonstrate effective interpretation of specific DRL agents and the potential to extend the approach to a broader set of deep neural network models.
IEEE TRANSACTIONS ON FUZZY SYSTEMS
(2021)
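The Soares et al. entry above approximates a DRL policy with IF-THEN rules. A minimal sketch of what such a rule readout can look like, with an entirely hypothetical rule list and feature names (not taken from the paper):

```python
# Render a distilled policy as an ordered IF-THEN rule list; the first
# matching rule fires, and the final rule is an unconditional default.
rules = [
    ("speed > 0.6", "brake"),
    ("distance_ahead < 5.0", "brake"),
    (None, "accelerate"),  # default rule
]

def explain(state):
    """Return the chosen action together with the rule that fired."""
    for cond, action in rules:
        if cond is None or eval(cond, {}, state):
            return action, cond or "otherwise"

print(explain({"speed": 0.7, "distance_ahead": 20.0}))
# → ('brake', 'speed > 0.6')
```

Because each decision cites the single rule that produced it, the explanation is local and directly inspectable, which is the interpretability gain over the original network.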
Proceedings Paper
Automation & Control Systems
Aaron M. Roth et al.
Summary: This research introduces a sensor-based navigation algorithm trained with deep reinforcement learning and improves robot navigation in dynamic environments by analyzing and refining the policy expressed as a decision tree.
2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)
(2021)
Proceedings Paper
Automation & Control Systems
Jakob Lover et al.
Summary: This paper implements three XAI methods to explain the decisions of a deep reinforcement learning agent performing automatic docking of a fully actuated vessel, addressing the interpretability and trustworthiness challenges of ANNs. The authors discuss the properties and suitability of the three methods with respect to important attributes of the docking agent.
Proceedings Paper
Computer Science, Artificial Intelligence
Roman Liessner et al.
Summary: This paper demonstrates the application of SHAP values to interpret the decisions made by a DRL agent in an autonomous driving scenario, showing that the RL-SHAP representation is effective in providing insights into the learned action-selection policy. By combining learned actions and SHAP values, it is possible to observe the effects of different features on the agent's decisions at each time step, and identify which influences are significant.
ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2
(2021)
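The Liessner et al. entry above (and the Zhang et al. Deep-SHAP entry) attributes a DRL agent's per-timestep action choice to individual state features via SHAP values. A hedged sketch of the underlying idea using Monte-Carlo permutation sampling, with a hypothetical linear "policy score" standing in for the network's output for the selected action:

```python
import random

def permutation_shap(policy_score, x, baseline, n_samples=2000, seed=0):
    """Monte-Carlo Shapley attribution: for random feature orderings,
    switch features from the baseline state to the real state x and
    credit each feature with its marginal change in the action score."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = rng.sample(range(n), n)  # random feature permutation
        z = list(baseline)
        prev = policy_score(z)
        for i in order:
            z[i] = x[i]
            cur = policy_score(z)
            phi[i] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

# Hypothetical linear score for the agent's chosen action.
w = [0.5, -1.0, 2.0]
score = lambda s: sum(wi * si for wi, si in zip(w, s))
state, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
print(permutation_shap(score, state, base))  # → [0.5, -2.0, 6.0]
```

For a linear score the attributions reduce to w_i * (x_i - baseline_i); for a real policy network the same sampling loop yields the per-feature, per-timestep influences that the RL-SHAP representation visualizes.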
Article
Engineering, Aerospace
Chang Wang et al.
CHINESE JOURNAL OF AERONAUTICS
(2020)
Article
Robotics
Shirel Josef et al.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2020)
Proceedings Paper
Engineering, Electrical & Electronic
Salomon Wollenstein-Betech et al.
2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC)
(2020)