Journal
IET INTELLIGENT TRANSPORT SYSTEMS
Volume: -, Issue: -, Pages: -
Publisher: WILEY
DOI: 10.1049/itr2.12336
Keywords
automated driving and intelligent vehicles; learning (artificial intelligence); non-linear control systems
In this work, a rule-constrained reinforcement learning (RCRL) control method is proposed to address the challenge of controlling an autonomous vehicle's unprotected left turn at an intersection. By training a reinforcement learning controller with rule constraints, using the outcome of the path planning module as a goal condition, the proposed approach generates locally optimal trajectories that adapt to unpredictable situations, making it safer and more reliable than end-to-end learning.
Controlling an autonomous vehicle's unprotected left turn at an intersection is a challenging task. Traditional rule-based autonomous driving decision and control algorithms struggle to construct accurate and trustworthy mathematical models for such circumstances, owing to their considerable uncertainty and unpredictability. To overcome this problem, a rule-constrained reinforcement learning (RCRL) control method is proposed in this work for autonomous driving. To train a reinforcement learning controller with rule constraints, the outcomes of the path planning module are used as a goal condition in the reinforcement learning framework. Because the rule constraints incorporate vehicle dynamics, the proposed approach is safer and more reliable than end-to-end learning, ensuring that the generated trajectories are locally optimal while adapting to unpredictable situations. In the experiments, a highly randomized two-way four-lane intersection is built in the CARLA simulator to verify the effectiveness of the proposed RCRL control method. The results show that the proposed method provides real-time safe planning and ensures high passing efficiency for autonomous vehicles in the unprotected left turn task.
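The abstract does not give the concrete constraint formulation, so the following is only a minimal sketch of the general RCRL idea it describes: a learned control command is projected into a rule-and-dynamics feasible set before execution, and the reward is conditioned on a goal waypoint from the path planner. All function names, signal names, and weights here are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def rule_constrained_action(policy_accel, v, v_max, a_min, a_max, dt=0.1):
    """Project a learned longitudinal acceleration onto rule constraints.

    Clips to actuator limits, then adjusts so the next-step speed neither
    exceeds the speed limit nor becomes negative (simple dynamics rule).
    """
    a = float(np.clip(policy_accel, a_min, a_max))
    v_next = v + a * dt
    if v_next > v_max:          # would break the speed-limit rule
        a = (v_max - v) / dt
    elif v_next < 0.0:          # would reverse through the intersection
        a = -v / dt
    return a

def goal_conditioned_reward(pos, goal, lateral_err, collision):
    """Reward shaped by tracking a planner waypoint (the goal condition).

    Penalty weights (0.1, 1.0, 100.0) are illustrative placeholders.
    """
    if collision:
        return -100.0
    dist = float(np.linalg.norm(np.asarray(goal, float) - np.asarray(pos, float)))
    return -0.1 * dist - 1.0 * abs(lateral_err)
```

A usage example: at the speed limit (`v = v_max = 10 m/s`), any positive acceleration requested by the policy is projected back to zero, so the executed trajectory stays rule-feasible regardless of what the learner outputs.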