Journal
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Volume 21, Issue 2, Pages 735-748
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TITS.2019.2893683
Keywords
Autonomous vehicles; Vehicle dynamics; Decision making; Reinforcement learning; Topology; Road transportation; Autonomous driving; Coordination; Coordination graph; Multiagent learning
Funding
- National Natural Science Foundation of China [U1808206, 61751311, 61825305, 61572104]
- National Natural Science Foundation of Liaoning Province [U1808206]
- Ministry of Military Equipment Development of China [61403120203]
- Dalian High Level Talent Innovation Support Program [2017RQ008]
- Dalian Science and Technology Innovation Fund [2018J12GX046]
Autonomous driving is one of the most important AI applications and has attracted extensive interest in recent years. A large number of studies have successfully applied reinforcement learning techniques to various aspects of autonomous driving, ranging from low-level control of driving maneuvers to higher-level strategic decision-making. However, comparatively little progress has been made in investigating how co-existing autonomous vehicles would interact with each other in a common environment and how reinforcement learning could be helpful in such situations. We take a step in this direction by applying multiagent reinforcement learning techniques to the high-level strategic decision-making of following or overtaking for a group of autonomous vehicles in highway scenarios. Learning to achieve coordination among vehicles in such situations is challenging because the unique features of vehicular mobility make it infeasible to directly apply existing coordinated learning approaches. To solve this problem, we propose a dynamic coordination graph to model the continuously changing topology of vehicles' interactions, and develop two basic learning approaches to coordinate the driving maneuvers of a group of vehicles. Several extension mechanisms are then presented to make these approaches workable in a more complex and realistic setting with an arbitrary number of vehicles. The experimental evaluation verifies the benefits of the proposed coordinated learning approaches over approaches that learn without coordination or that rely on traditional mobility models based on expert driving rules.
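The idea of a dynamic coordination graph can be sketched concretely: the global value of a joint maneuver decomposes into pairwise terms over graph edges, and the edges are rebuilt at every step as vehicles move. The following is a minimal illustrative sketch, not the paper's exact formulation; the distance-based edge rule, the two-action set, and all names (`build_graph`, `joint_action`, `q_pair`) are assumptions made for the example.

```python
from itertools import product

# Illustrative high-level action set (the paper studies following/overtaking decisions).
ACTIONS = ("follow", "overtake")

def build_graph(positions, radius=10.0):
    """Rebuild coordination edges each step: vehicles within `radius`
    of each other must coordinate (the dynamic topology)."""
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(positions[i] - positions[j]) <= radius]

def joint_action(edges, q_pair, n):
    """Pick the joint action maximizing the sum of pairwise Q-values
    Q_ij(a_i, a_j) over coordination edges. Exhaustive search is used
    here for clarity; scalable variants would use variable elimination
    or max-plus message passing on the graph instead."""
    best, best_val = None, float("-inf")
    for joint in product(range(len(ACTIONS)), repeat=n):
        val = sum(q_pair[(i, j)][joint[i]][joint[j]] for (i, j) in edges)
        if val > best_val:
            best, best_val = joint, val
    return tuple(ACTIONS[a] for a in best), best_val

# Usage: three vehicles, only the first two are close enough to interact.
positions = [0.0, 5.0, 50.0]
edges = build_graph(positions)            # [(0, 1)]
q_pair = {(0, 1): [[1.0, 0.0],           # Q_01(follow, ·)
                   [2.0, 0.5]]}          # Q_01(overtake, ·)
acts, val = joint_action(edges, q_pair, n=3)
```

Because vehicle 2 has no edges, its action does not affect the objective, and the maximization only needs to resolve the interaction on edge (0, 1); this locality is exactly what makes the coordination-graph decomposition attractive as the topology changes.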