Article

Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning

Journal

JOURNAL OF ADVANCED TRANSPORTATION
Volume 2020, Issue -, Pages -

Publisher

WILEY-HINDAWI
DOI: 10.1155/2020/4752651

Keywords

-

Funding

  1. National Key R&D Program of China [2019YFB1600500]
  2. Changjiang Scholars and Innovative Research Team in University [IRT_17R95]
  3. National Natural Science Foundation of China [51775053, 51908054]
  4. Fundamental Research Funds for the Central Universities [300102228506]

Abstract

This study proposes a framework for learning drivers' car-following behavior based on maximum entropy deep inverse reinforcement learning. The framework learns a reward function, represented by a fully connected neural network, from driving data comprising the speed of the driver's vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers are used. After the participants were clustered into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Under fivefold cross-validation, the proposed model achieved the lowest root mean squared percentage error and modified Hausdorff distance among the compared models, demonstrating superior ability to reproduce drivers' car-following behavior. Moreover, the proposed model captured the characteristics of different driving styles in car-following scenarios, and the learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can thus serve as a new tool for explaining and modeling driving behavior, providing a reference for the development of human-like autonomous driving models.
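The reward representation described in the abstract can be sketched as a small fully connected network over the three car-following state features (own speed, gap to the leading vehicle, relative speed). The layer widths, activation choice, and initialization below are illustrative assumptions for a minimal sketch, not the authors' exact configuration:

```python
import numpy as np

def init_reward_net(rng, sizes=(3, 32, 32, 1)):
    """Randomly initialize a fully connected reward network.

    sizes: layer widths. The input layer takes the three features
    (own speed, gap to leader, relative speed); the widths of the
    hidden layers are assumptions, not the paper's configuration.
    """
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def reward(params, state):
    """Forward pass: map a state feature vector to a scalar reward."""
    h = np.asarray(state, dtype=float)
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)  # ReLU hidden layers
    W, b = params[-1]
    return float(h @ W + b)             # linear scalar output

rng = np.random.default_rng(0)
params = init_reward_net(rng)
# Hypothetical state: speed 15 m/s, gap 20 m, closing at 1.5 m/s.
r = reward(params, [15.0, 20.0, -1.5])
```

In maximum entropy deep IRL, the weights of such a network would be updated so that the expected state visitation frequencies under the induced policy match those observed in the drivers' demonstrations; the sketch above covers only the reward representation itself.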

