Journal
IET INTELLIGENT TRANSPORT SYSTEMS
Volume 11, Issue 7, Pages 417-423
Publisher
INST ENGINEERING TECHNOLOGY-IET
DOI: 10.1049/iet-its.2017.0153
Keywords
gradient methods; learning (artificial intelligence); adaptive control; road traffic control; traffic engineering computing; control engineering computing; digital simulation; traffic light control; value-function-based reinforcement learning; deep neural network architectures; complex control problems; high-dimensional state space; action spaces; deep policy-gradient RL algorithm; value-function-based agent RL algorithms; traffic signal; traffic intersection; adaptive traffic light control agents; graphical traffic simulator; control signals; PG-based agent maps; optimal control; urban mobility traffic simulator; training process
Abstract
Recent advances in combining deep neural network architectures with reinforcement learning (RL) techniques have shown promising results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this study, the authors built two kinds of RL algorithms: deep policy-gradient (PG) and value-function-based agents, which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The PG-based agent maps its observation directly to the control signal; the value-function-based agent, by contrast, first estimates values for all legal control signals and then selects the control action with the highest value. Their methods show promising results in a traffic network simulated in the Simulation of Urban MObility (SUMO) traffic simulator, without suffering from instability issues during the training process.
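The contrast the abstract draws between the two agents can be illustrated with a minimal sketch. This is not the authors' implementation: the linear "networks", the state dimension, and the number of phases are all placeholder assumptions standing in for the deep models and simulator snapshots described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 16  # stand-in for a flattened snapshot of the intersection (assumption)
N_PHASES = 4    # assumed number of legal traffic-light control signals

# Toy linear maps standing in for the deep networks in the paper.
W_policy = rng.normal(size=(N_PHASES, STATE_DIM))
W_value = rng.normal(size=(N_PHASES, STATE_DIM))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pg_agent(state):
    """PG-based agent: maps the observation directly to a control signal
    by sampling from a distribution over the legal phases."""
    probs = softmax(W_policy @ state)
    return int(rng.choice(N_PHASES, p=probs))

def value_agent(state):
    """Value-function-based agent: first estimates a value for every legal
    control signal, then selects the action with the highest value."""
    q_values = W_value @ state
    return int(np.argmax(q_values))

state = rng.normal(size=STATE_DIM)  # stand-in for one simulator snapshot
print("PG action:", pg_agent(state))
print("Value-based action:", value_agent(state))
```

The key structural difference survives even in this toy form: the PG agent never computes per-action values, while the value-based agent's action is a deterministic argmax over its value estimates.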