Article | Proceedings Paper

Value-based deep reinforcement learning for adaptive isolated intersection signal control

Journal

IET INTELLIGENT TRANSPORT SYSTEMS
Volume 12, Issue 9, Pages 1005-1010

Publisher

Institution of Engineering and Technology (IET)
DOI: 10.1049/iet-its.2018.5170

Keywords

traffic engineering computing; learning (artificial intelligence); neural nets; road traffic; iterative methods; dynamic programming; value-based deep reinforcement learning; adaptive isolated intersection signal control; road network efficiency improvement; advanced traffic signal control methods; intelligent transportation systems; smart city; modern city; artificial intelligence; machine learning-based framework; deep Q-learning neural network; model-free technique; optimal discrete-time action selection problems; variable green time; traffic fluctuations; dynamic discount factor; iterative Bellman equation; biased action-value function estimation; VISSIM software; traffic arrival rates; traffic arrival patterns

Funding

  1. China Engineering Consultants, Inc. [06923]

Abstract

Intelligent transportation systems aim to characterize a smart city by improving road-network efficiency through advanced traffic signal control methods. Recently, owing to significant progress in artificial intelligence, machine learning-based frameworks for adaptive traffic signal control have attracted considerable attention. In particular, the deep Q-learning neural network is a model-free technique that can be applied to optimal discrete-time action selection problems. However, setting a variable green time is a key mechanism for reflecting traffic fluctuations, so time steps need not be fixed intervals in the reinforcement learning framework. In this study, the authors propose a dynamic discount factor embedded in the iterative Bellman equation to prevent biased estimation of the action-value function caused by inconstant time-step intervals. Moreover, the action is added to the input layer of the neural network during training, and the output layer is the estimated action value for that action. The trained neural network can then be used as the agent's policy, generating the action that yields the optimal estimated value within a finite set. The preliminary results show that the trained agent outperforms a fixed timing plan in all testing cases, reducing system total delay by 20%.
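
As a concrete illustration of the two mechanisms in the abstract, the following is a minimal sketch (not the authors' code), written in PyTorch with hypothetical names (QNet, greedy_action, td_target) and an assumed normalization dt / dt_ref for the dynamic discount: the one-hot action is concatenated to the state at the input layer, the policy maximizes the scalar output over the finite action set, and the Bellman target raises gamma to a power proportional to the elapsed (variable) green time.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QNet(nn.Module):
        """Action-in-input Q-network: the one-hot action is concatenated to
        the state features, and the output is a scalar estimate of Q(s, a)."""
        def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.n_actions = n_actions
            self.net = nn.Sequential(
                nn.Linear(state_dim + n_actions, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
            onehot = F.one_hot(action, self.n_actions).float()
            return self.net(torch.cat([state, onehot], dim=-1)).squeeze(-1)

    def greedy_action(q_net: QNet, state: torch.Tensor) -> int:
        """Policy: evaluate Q(s, a) for every action in the finite set and
        pick the action with the highest estimated value."""
        actions = torch.arange(q_net.n_actions)
        states = state.unsqueeze(0).expand(q_net.n_actions, -1)
        with torch.no_grad():
            return int(q_net(states, actions).argmax())

    def td_target(q_net: QNet, reward: float, next_state: torch.Tensor,
                  dt: float, gamma: float = 0.95, dt_ref: float = 5.0) -> torch.Tensor:
        """Bellman target with a dynamic discount factor. Each transition
        spans dt seconds (variable green time) rather than one fixed step,
        so gamma is raised to dt / dt_ref (dt_ref is an assumed reference
        interval) instead of being applied exactly once per step."""
        with torch.no_grad():
            actions = torch.arange(q_net.n_actions)
            states = next_state.unsqueeze(0).expand(q_net.n_actions, -1)
            q_next = q_net(states, actions).max()
        return reward + gamma ** (dt / dt_ref) * q_next

Scaling the discount exponent by the real elapsed time discounts reward per unit of time rather than per decision step, which is the property a dynamic discount factor restores when step intervals are inconstant; the exact functional form used in the paper may differ from this sketch.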


