Article

Deep Reinforcement Learning-Based Vehicle Driving Strategy to Reduce Crash Risks in Traffic Oscillations

Journal

TRANSPORTATION RESEARCH RECORD
Volume 2674, Issue 10, pp. 42-54

Publisher

SAGE PUBLICATIONS INC
DOI: 10.1177/0361198120937976

Keywords

-

Funding

  1. National Natural Science Foundation of China [71871057]
  2. Fundamental Research Funds for the Central Universities [2242019R40060, 2242020K40056, 2242020K40063]

Abstract

The primary objective of this study is to propose a deep reinforcement learning-based driving strategy for individual vehicles that mitigates oscillations and improves traffic safety in stop-and-go waves. A deep deterministic policy gradient (DDPG)-based driving strategy, which requires only information directly obtained by in-vehicle sensors, is proposed for system performance optimization. Two typical scenarios were simulated in the SUMO traffic simulation software: (i) the leading vehicle slowed down according to real trajectory data to produce a single oscillation; (ii) the leading vehicle performed several abrupt decelerations with varying degrees of disturbance to produce multiple oscillations. The DDPG agents interacted with the SUMO platform to determine the optimal vehicle accelerations that reduce crash risks in various stop-and-go waves. The results showed that the proposed DDPG-based driving strategy reduced crash risk by 68.9%-78.4%. Scenarios with different penetration rates of DDPG agents and various traffic flow rates were compared to test the effect of the proposed strategy: crash risk fell further as the penetration rate increased, and the strategy performed better at higher traffic flow rates. The proposed strategy was also compared with adaptive cruise control and jam-absorbing driving strategies, and it outperformed both in reducing crash risks.
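The abstract names two mechanics that are characteristic of DDPG: a deterministic actor mapping sensor readings to a continuous acceleration, and soft (Polyak) updates of a target network. A minimal illustrative sketch of those two pieces is below. The class name, the linear policy, the three-element sensor state (gap, relative speed, own speed), and the acceleration bound are all assumptions for illustration; they are not the paper's implementation, which trains full neural networks against SUMO.

```python
import numpy as np


def soft_update(target, online, tau=0.005):
    """Polyak-average online parameters into the target parameters.

    This is the soft target update used by DDPG to stabilize learning.
    """
    return (1.0 - tau) * target + tau * online


class ToyDDPGAgent:
    """Hypothetical minimal actor: a linear deterministic policy mapping a
    sensor state (e.g. gap, relative speed, own speed -- quantities that
    in-vehicle sensors provide) to a bounded acceleration command."""

    def __init__(self, state_dim=3, a_max=3.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=state_dim)  # online actor weights
        self.w_target = self.w.copy()                   # target actor weights
        self.a_max = a_max                              # comfort/accel limit (m/s^2)
        self.rng = rng

    def act(self, state, noise_scale=0.0):
        """Deterministic action plus optional Gaussian exploration noise,
        clipped to the acceleration limits."""
        a = float(self.w @ state)
        if noise_scale > 0:
            a += self.rng.normal(scale=noise_scale)
        return float(np.clip(a, -self.a_max, self.a_max))

    def update_target(self, tau=0.005):
        """Slowly track the online weights with the target weights."""
        self.w_target = soft_update(self.w_target, self.w, tau)
```

In a full implementation the linear policy would be replaced by actor and critic networks trained from a replay buffer, with SUMO (e.g. via its TraCI interface) supplying states and executing the chosen accelerations each step.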
