Article

Toward Human-in-the-Loop AI: Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving

Journal

ENGINEERING
Volume 21, Pages 75-91

Publisher

ELSEVIER
DOI: 10.1016/j.eng.2022.05.017

Keywords

Human-in-the-loop AI; Deep reinforcement learning; Human guidance; Autonomous driving

Abstract

Because machine learning cannot yet handle the full variety of real-world situations, it cannot replace humans in practical applications. Introducing humans into the training loop of artificial intelligence (AI) leverages their robustness and adaptability. In this study, a real-time human-guidance-based deep reinforcement learning (DRL) method is developed for training autonomous driving policies. Through a novel control transfer mechanism, humans can intervene and correct the agent's actions in real time during training. By fusing these real-time human guidance actions into the training loop, the proposed method improves the efficiency and performance of DRL.
Due to its limited intelligence and abilities, machine learning is currently unable to handle various situations and thus cannot completely replace humans in real-world applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based deep reinforcement learning (Hug-DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience.

© 2022 THE AUTHORS. Published by Elsevier Ltd on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
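The abstract describes two ingredients: a control-transfer step in which a human can override the agent's action in real time, and a policy update that fuses the recorded human actions into training. The sketch below is one plausible reading of that idea in Python/PyTorch; it is not the paper's implementation, and every interface it uses (env, actor, critic, human.read(), buffer, human_weight) is a hypothetical stand-in introduced for illustration.

# Illustrative sketch only. Assumes a gym-style env, torch actor/critic
# networks, a human-input device exposing read(), and a replay buffer
# that stores an extra per-transition flag.
import torch

def collect_step(env, obs, actor, human, buffer):
    # Agent proposes an action; the human may override it in real time.
    with torch.no_grad():
        agent_action = actor(torch.as_tensor(obs, dtype=torch.float32)).numpy()
    human_action = human.read()          # e.g., steering input; None if idle
    intervened = human_action is not None
    action = human_action if intervened else agent_action
    next_obs, reward, done, info = env.step(action)
    # Store the intervention flag so the update rule can treat
    # human-guided transitions differently.
    buffer.add(obs, action, reward, next_obs, done, intervened)
    return next_obs, done

def actor_loss(actor, critic, batch, human_weight=1.0):
    # Usual off-policy actor objective: maximize Q under the policy ...
    pi = actor(batch.obs)
    q_term = -critic(batch.obs, pi).mean()
    # ... plus a supervised term pulling the policy toward the human's
    # actions on intervened samples, one common way to fuse guidance.
    mask = batch.intervened.float().unsqueeze(-1)
    n = mask.sum().clamp(min=1.0)
    imitation = (mask * (pi - batch.action).pow(2)).sum() / n
    return q_term + human_weight * imitation

The intervention flag stored with each transition is what lets the update distinguish human-guided samples; human_weight is a hypothetical hyperparameter trading off the standard actor objective against imitation of the human's corrections.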
