Article

Physics-informed deep reinforcement learning-based integrated two-dimensional car-following control strategy for connected automated vehicles

Journal

Knowledge-Based Systems
Volume 269

Publisher

Elsevier
DOI: 10.1016/j.knosys.2023.110485

Keywords

Connected automated vehicles; Two-dimensional control; Deep reinforcement learning; Traffic oscillation dampening; Path tracking


This study proposes an innovative integrated two-dimensional control strategy for connected automated vehicles based on deep reinforcement learning. The strategy efficiently controls the vehicles in terms of both stable longitudinal car-following performance and accurate lateral path-tracking performance. The controller utilizes vehicle-to-everything communication and roadway geometry information, and applies a physics-informed DRL state-fusion approach and reward function to better exploit that information and borrow the merits of control-theory concepts. Simulated experiments validate the controller's accuracy and stability in diverse traffic scenarios.
Connected automated vehicles (CAVs) are broadly recognized as next-generation transformative transportation technologies having great potential to improve traffic safety, efficiency, and stability. Efficiently controlling CAVs on two-dimensional curvilinear roadways to follow preceding vehicles is denoted as the two-dimensional car-following process, which is highly critical; this process is challenging to implement owing to the complexity and varied nature of driving environments. This study proposes an innovative integrated two-dimensional control strategy for CAVs based on deep reinforcement learning (DRL), which efficiently regulates the two-dimensional car-following process of CAVs in terms of both stability-wise longitudinal control performance and accurate lateral path-tracking performance. Within the control framework, each CAV can receive the surrounding information from downstream vehicles and roadway geometry based on vehicle-to-everything (V2X) communication. To better utilize this information, we designed a physics-informed DRL state fusion approach and reward function, which efficiently embeds prior physics knowledge and borrows the merits of the equilibrium and consensus concepts from control theory. Given the physics-informed information, the DRL-based controller outputs the integrated control instructions for both longitudinal and lateral control. For training, we constructed a roadway with a set of varying curvatures and embedded the ground-truth vehicle trajectory datasets to more effectively capture the realistic variations in the roadway geometry and driving environment. To facilitate value function approximation and enhance the policy iteration process in training, the distributed proximal policy optimization (DPPO) algorithm was applied, owing to its balanced performance.
A series of simulated experiments were conducted to validate the controller's lateral control accuracy and stability-wise oscillation dampening performance in diverse traffic scenarios, including extreme ones. © 2023 Elsevier B.V. All rights reserved.
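The abstract describes a physics-informed state fusion and reward that embed equilibrium and consensus concepts from control theory into the DRL loop. The sketch below illustrates what such a design could look like; it is not the paper's actual formulation — the equilibrium-gap model (constant time headway), all signal names, and all weights are illustrative assumptions.

```python
# Hedged sketch: physics-informed state fusion and reward for a
# two-dimensional car-following task. Names, weights, and the
# constant-time-headway equilibrium model are assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    gap: float            # bumper-to-bumper gap to the preceding vehicle (m)
    ego_speed: float      # ego longitudinal speed (m/s)
    lead_speed: float     # preceding-vehicle speed received via V2X (m/s)
    lateral_error: float  # signed offset from the reference path (m)
    curvature: float      # roadway curvature at the ego position (1/m)

def equilibrium_gap(speed, time_headway=1.5, standstill=2.0):
    """Assumed constant-time-headway equilibrium spacing (control-theory prior)."""
    return standstill + time_headway * speed

def fused_state(obs):
    """Embed physics priors: express raw signals as deviations from
    the equilibrium spacing and the speed-consensus condition."""
    gap_error = obs.gap - equilibrium_gap(obs.ego_speed)   # spacing deviation
    speed_error = obs.lead_speed - obs.ego_speed           # consensus deviation
    return (gap_error, speed_error, obs.lateral_error, obs.curvature)

def reward(state, w_gap=1.0, w_speed=1.0, w_lat=2.0):
    """Penalize deviations from longitudinal equilibrium and the lateral path;
    the reward peaks at 0 exactly at the equilibrium/consensus point."""
    gap_error, speed_error, lateral_error, _ = state
    return -(w_gap * gap_error**2 + w_speed * speed_error**2
             + w_lat * lateral_error**2)

# At equilibrium spacing, matched speeds, and zero lateral error,
# the reward reaches its maximum of zero.
obs = Observation(gap=equilibrium_gap(20.0), ego_speed=20.0, lead_speed=20.0,
                  lateral_error=0.0, curvature=0.01)
print(reward(fused_state(obs)) == 0.0)  # True
```

A DRL agent (the paper uses DPPO) would consume the fused state and emit joint longitudinal (acceleration) and lateral (steering) actions; expressing the state as deviations from equilibrium, rather than raw kinematics, is one common way such priors are injected.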
