PhaseLight: A Universal and Practical Traffic Signal Control Algorithm Based on Reinforcement Learning

Cited by: 0
Authors
Wu, Zhikai [1 ]
Hu, Jianming [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
Keywords
DOI
10.1109/ITSC57777.2023.10422109
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Traffic signal control (TSC) has become an important issue in urban traffic management. Recent studies apply reinforcement learning (RL) to traffic signal control because it requires no prior knowledge and enables real-time control. However, these studies have mainly focused on control performance in the training scenarios while ignoring adaptability to different intersection topologies and flow distributions. Furthermore, most studies employ an impractical phase-selection scheme with an unfixed phase order that may confuse human drivers. To address these issues, we propose PhaseLight, a method that combines a lane-based representation, a sophisticated network structure, and an advanced reinforcement learning algorithm. It adapts to various intersection topologies and flow distributions without additional training. Meanwhile, it employs a phase-switching scheme to improve practicality with little performance loss. Comprehensive experiments are conducted using the Simulation of Urban MObility (SUMO) simulator. The results in both training and testing scenarios demonstrate the effectiveness of PhaseLight and indicate its potential in real-world applications.
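To make the abstract's two key ingredients concrete, the minimal Python sketch below shows one way a lane-based observation and a fixed-order phase-switching action could be wired to SUMO through its TraCI API. This is an illustration only, not the authors' implementation: the configuration file name, traffic-light id, decision interval, and the threshold rule standing in for the learned policy are all assumptions, and PhaseLight's actual network structure and RL algorithm are not reproduced here.

    # Hypothetical sketch: lane-based state + fixed-order phase switching in SUMO.
    # Requires a SUMO installation with its tools/ directory on PYTHONPATH so that
    # the traci module can be imported. The learned RL policy is replaced by a
    # simple queue threshold, clearly marked as a placeholder.
    import traci

    SUMO_CFG = "intersection.sumocfg"   # assumed SUMO configuration file
    TLS_ID = "tls0"                     # assumed traffic-light id in the network
    DECISION_INTERVAL = 5               # assumed seconds between control decisions

    def lane_based_observation(tls_id):
        """Queue length and vehicle count per incoming lane (lane-based state)."""
        lanes = sorted(set(traci.trafficlight.getControlledLanes(tls_id)))
        obs = []
        for lane in lanes:
            obs.append(traci.lane.getLastStepHaltingNumber(lane))  # queued vehicles
            obs.append(traci.lane.getLastStepVehicleNumber(lane))  # all vehicles
        return obs

    def switch_to_next_phase(tls_id):
        """Advance to the next phase of the fixed cycle (phase-switching scheme)."""
        # getAllProgramLogics is available in recent SUMO releases; a real
        # controller would also skip or insert yellow phases explicitly.
        logic = traci.trafficlight.getAllProgramLogics(tls_id)[0]
        n_phases = len(logic.phases)
        current = traci.trafficlight.getPhase(tls_id)
        traci.trafficlight.setPhase(tls_id, (current + 1) % n_phases)

    def keep_or_switch(observation):
        """Placeholder for the learned policy: 0 = keep phase, 1 = switch."""
        # A trained RL agent would map the lane-based observation to this binary
        # action; here we switch whenever the total queue exceeds a threshold.
        total_queue = sum(observation[0::2])
        return 1 if total_queue > 10 else 0

    def run(max_steps=3600):
        traci.start(["sumo", "-c", SUMO_CFG])
        try:
            step = 0
            while step < max_steps:
                obs = lane_based_observation(TLS_ID)
                if keep_or_switch(obs) == 1:
                    switch_to_next_phase(TLS_ID)
                for _ in range(DECISION_INTERVAL):
                    traci.simulationStep()
                    step += 1
        finally:
            traci.close()

    if __name__ == "__main__":
        run()

Because the action is only "keep the current phase" or "advance to the next phase in a fixed cycle", the phase order seen by drivers never changes, which is the practicality argument the abstract makes for phase switching over free phase selection.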
Pages: 4738-4743
Number of pages: 6