A Multi-phase Intersection Traffic Signal Control Strategy with Deep Reinforcement Learning

Cited: 0
Authors
Li, Congcong [1 ]
Li, Yuan [1 ]
Liu, Guihua [2 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu, Sichuan, Peoples R China
[2] Chongqing Railway Transportat Grp Co Ltd, Operat Co 4, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
intersection; deep reinforcement learning; signal timing; styling; phase sequence;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Classification Code
0812;
Abstract
In this paper, a deep Q-network (DQN) based deep reinforcement learning algorithm for multi-phase intersection traffic control is proposed to improve the capacity of urban road intersections. Deep learning is applied to extract the features of traffic flow and learn the Q-function of reinforcement learning. Denoising stacked autoencoders are used to reduce the effects of abnormal data generated during system operation. Considering the connection between the signal timing scheme and the phase sequence, the DQN algorithm adjusts the signal phase sequence according to the dynamic traffic characteristics of the intersection while performing real-time self-adaptive adjustment of the signal timing. The algorithm is tested on a simulation platform consisting of VISSIM and Python. The performance of the proposed method is comprehensively compared with a traditional algorithm using a fixed or free phase sequence under different traffic demands. Simulation results suggest that the proposed method significantly reduces intersection delay compared to the alternative methods.
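To make the architecture described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a DQN whose state features come from a denoising autoencoder layer, written in PyTorch as an assumed framework since the paper only states that Python and VISSIM are used. The state dimension, number of candidate phases, layer sizes, and all names such as STATE_DIM, NUM_PHASES, and DQNController are hypothetical choices for illustration only; experience replay, the target network, and exploration are omitted.

```python
# Illustrative sketch only: a denoising autoencoder compresses noisy
# intersection measurements into features, and a small Q-network maps those
# features to Q-values over candidate signal phases. All dimensions and names
# below are assumptions, not taken from the paper.
import torch
import torch.nn as nn

STATE_DIM = 16    # assumed: e.g. queue lengths / occupancies per approach lane
FEATURE_DIM = 8   # assumed size of the learned feature vector
NUM_PHASES = 4    # assumed number of candidate signal phases

class DenoisingAutoencoder(nn.Module):
    """One stacked-autoencoder layer trained to reconstruct clean states from
    corrupted inputs, damping the effect of abnormal detector data."""
    def __init__(self, in_dim, hidden_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        # Inject noise only while training the autoencoder (denoising objective).
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)
        features = self.encoder(x)
        return self.decoder(features), features

class DQNController(nn.Module):
    """Q-network: encoded traffic features -> Q-value for each phase choice."""
    def __init__(self):
        super().__init__()
        self.dae = DenoisingAutoencoder(STATE_DIM, FEATURE_DIM)
        self.q_head = nn.Sequential(
            nn.Linear(FEATURE_DIM, 32), nn.ReLU(),
            nn.Linear(32, NUM_PHASES),
        )

    def forward(self, state):
        _, features = self.dae(state)
        return self.q_head(features)

# Greedy phase selection for one decision step (epsilon-greedy exploration,
# replay buffer, and target network are omitted for brevity).
controller = DQNController().eval()
state = torch.rand(1, STATE_DIM)               # placeholder traffic observation
next_phase = controller(state).argmax(dim=1).item()
print("selected phase index:", next_phase)
```

In such a design the Q-values rank candidate phases rather than a fixed cyclic order, which is one plausible way to realize the phase-sequence adjustment the abstract describes; the signal-timing adjustment could analogously be handled by extending the action set with green-time increments.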
Citation
Pages: 959-964
Number of pages: 6
Related Papers
50 records in total
  • [1] Cooperative Multi-Intersection Traffic Signal Control Based on Deep Reinforcement Learning
    Huang, Rui
    Hu, Jianming
    Huo, Yusen
    Pei, Xin
    [J]. CICTP 2019: TRANSPORTATION IN CHINA-CONNECTING THE WORLD, 2019, : 2959 - 2970
  • [2] Cooperative Control for Multi-Intersection Traffic Signal Based on Deep Reinforcement Learning and Imitation Learning
    Huo, Yusen
    Tao, Qinghua
    Hu, Jianming
    [J]. IEEE ACCESS, 2020, 8 : 199573 - 199585
  • [3] A Regional Traffic Signal Control Strategy with Deep Reinforcement Learning
    Li, Congcong
    Yan, Fei
    Zhou, Yiduo
    Wu, Jia
    Wang, Xiaomin
    [J]. 2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018, : 7690 - 7695
  • [4] Deep Reinforcement Learning Based Strategy For Optimizing Phase Splits in Traffic Signal Control
    Yang, Huan
    Zhao, Han
    Wang, Yu
    Liu, Guoqiang
    Wang, Danwei
    [J]. 2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 2329 - 2334
  • [5] Multi-agent Deep Reinforcement Learning collaborative Traffic Signal Control method considering intersection heterogeneity
    Bie, Yiming
    Ji, Yuting
    Ma, Dongfang
    [J]. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2024, 164
  • [6] Unification of probabilistic graph model and deep reinforcement learning (UPGMDRL) for multi-intersection traffic signal control
    Sattarzadeh, Ali Reza
    Pathirana, Pubudu N.
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 305
  • [7] FedLight: Federated Reinforcement Learning for Autonomous Multi-Intersection Traffic Signal Control
    Ye, Yutong
    Zhao, Wupan
    Wei, Tongquan
    Hu, Shiyan
    Chen, Mingsong
    [J]. 2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 847 - 852
  • [8] Multi-agent deep reinforcement learning with traffic flow for traffic signal control
    Hou, Liang
    Huang, Dailin
    Cao, Jie
    Ma, Jialin
    [J]. JOURNAL OF CONTROL AND DECISION, 2023,
  • [9] Traffic Signal Control for An Isolated Intersection Using Reinforcement Learning
    Maiti, Nandan
    Chilukuri, Bhargava Rama
    [J]. 2021 INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS & NETWORKS (COMSNETS), 2021, : 629 - 633
  • [10] Learning Multi-Intersection Traffic Signal Control via Coevolutionary Multi-Agent Reinforcement Learning
    Chen, Wubing
    Yang, Shangdong
    Li, Wenbin
    Hu, Yujing
    Liu, Xiao
    Gao, Yang
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024,