Journal
CMC-COMPUTERS MATERIALS & CONTINUA
Volume 71, Issue 2, Pages 2225-2247
Publisher
TECH SCIENCE PRESS
DOI: 10.32604/cmc.2022.022952
Keywords
Artificial intelligence; traffic light control; traffic disruptions; multi-agent deep Q-network; deep reinforcement learning
Funding
- Research Creativity and Management Office, Universiti Sains Malaysia
This paper investigates the use of multi-agent deep Q-network (MADQN) to address the curse of dimensionality that occurs in the traditional multi-agent reinforcement learning (MARL) approach. The proposed MADQN is applied to traffic light controllers at multiple intersections with busy traffic and traffic disruptions, particularly rainfall. MADQN is based on deep Q-network (DQN), which integrates the traditional reinforcement learning (RL) approach with the newly emerging deep learning (DL) approach. MADQN enables traffic light controllers to learn, exchange knowledge with neighboring agents, and select optimal joint actions in a collaborative manner. A case study based on a real traffic network is conducted as part of a sustainable urban city project in Sunway City, Kuala Lumpur, Malaysia. An investigation is also performed using a grid traffic network (GTN) to verify that the proposed scheme is effective in a traditional traffic network. The proposed scheme is evaluated using two simulation tools, namely Matlab and Simulation of Urban Mobility (SUMO). Simulation results show that the proposed scheme reduces the cumulative delay of vehicles by up to 30%.
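To illustrate the DQN mechanism the abstract refers to, the sketch below shows a single traffic-light agent with epsilon-greedy action selection, an experience replay buffer, and a temporal-difference update. This is not the authors' implementation: the state layout (queue lengths per approach), a linear Q-function approximator, and all hyperparameter values are simplifying assumptions made purely for illustration; the paper's MADQN additionally coordinates multiple such agents across neighboring intersections.

```python
import random
from collections import deque
import numpy as np

class DQNAgent:
    """Minimal single-intersection DQN sketch (illustrative only).

    Assumptions (not from the paper): state = queue lengths on 4
    approaches, actions = 2 signal phases, and a linear Q-function
    in place of a deep network, to keep the example self-contained.
    """

    def __init__(self, state_dim=4, n_actions=2, lr=0.01, gamma=0.95, eps=0.1):
        self.W = np.zeros((state_dim, n_actions))  # linear Q approximator
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_actions = n_actions
        self.replay = deque(maxlen=1000)  # experience replay buffer

    def q_values(self, s):
        # Q(s, a) for all actions under the linear approximator
        return s @ self.W

    def act(self, s):
        # epsilon-greedy: explore with probability eps, else greedy
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q_values(s)))

    def remember(self, s, a, r, s2):
        self.replay.append((s, a, r, s2))

    def train_step(self, batch_size=32):
        # sample a minibatch and take one TD(0) gradient step per sample
        if len(self.replay) < batch_size:
            return
        for s, a, r, s2 in random.sample(self.replay, batch_size):
            target = r + self.gamma * np.max(self.q_values(s2))
            td_error = target - self.q_values(s)[a]
            self.W[:, a] += self.lr * td_error * s

# Toy usage: reward is negative total queue length (a proxy for delay)
agent = DQNAgent()
state = np.array([3.0, 1.0, 0.0, 2.0])
action = agent.act(state)
agent.remember(state, action, -float(state.sum()), state)
```

In the paper's multi-agent setting, each intersection would run such a controller and exchange knowledge (e.g., Q-values or rewards) with its neighbors so that joint actions are selected collaboratively rather than independently.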