Collaborative Traffic Signal Automation Using Deep Q-Learning

Cited by: 1
Authors
Hassan, Muhammad Ahmed [1]
Elhadef, Mourad [2]
Khan, Muhammad Usman Ghani [1]
Affiliations
[1] Univ Engn & Technol, Natl Ctr Artificial Intelligence NCAI, Lahore 54000, Pakistan
[2] Abu Dhabi Univ, Coll Engn, Comp Sci & Informat Technol Dept, Abu Dhabi, U Arab Emirates
Keywords
Traffic congestion; Junctions; Collaboration; Roads; Optimization; Deep learning; Q-learning; Reinforcement learning (RL); Multi-agent deep reinforcement learning (MDRL); Multi-agent systems; Decentralized applications; Computer vision; Deep Q-network (DQN); Simulation of Urban Mobility (SUMO); Decentralized multi-agent network (DMN); Coordination
DOI
10.1109/ACCESS.2023.3331317
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Multi-agent deep reinforcement learning (MDRL) is a popular choice for multi-intersection traffic signal control, generating decentralized cooperative traffic signal strategies for specific traffic networks. Despite its widespread use, current MDRL algorithms have certain limitations. First, their network-specific multi-agent settings impede the transferability and generalization of traffic signal policies to different traffic networks. Second, existing MDRL algorithms struggle to adapt to a varying number of vehicles crossing the traffic network. This paper introduces a novel Cooperative Multi-Agent Deep Q-Network (CMDQN) for traffic signal control to alleviate traffic congestion. We consider novel state features such as the signal state at the preceding junction, the distance between junctions, visual features, and average speed. Our CMDQN applies a Decentralized Multi-Agent Network (DMN), employing a Markov Game abstraction for collaboration and state-information sharing between agents to reduce waiting times. Our work employs Reinforcement Learning (RL) and a Deep Q-Network (DQN) for adaptive traffic signal control, leveraging deep computer vision for real-time traffic density information. We also propose an intersection-level and a network-wide reward function to evaluate performance and optimize traffic flow. The developed system was evaluated through both synthetic and real-world experiments: the synthetic network is based on the Simulation of Urban Mobility (SUMO) traffic simulator, and the real-world network uses traffic data collected from cameras installed at actual traffic signals. Our results demonstrate improved performance across several key metrics compared with the baseline model, reducing waiting times and improving traffic flow. This research presents a promising approach for cooperative traffic signal control, contributing significantly to efforts to enhance traffic management systems.
Pages: 136015-136032
Page count: 18
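
For readers who want a concrete picture of the approach summarized in the abstract, the following is a minimal, illustrative sketch of a per-intersection deep Q-learning agent with a combined intersection/network-wide reward. It is not the authors' implementation: the feature layout (IntersectionState), the network sizes, the weighting inside combined_reward, and the use of PyTorch are all assumptions made for illustration only; the paper's actual state encoding, architecture, and reward coefficients are not reproduced here.

```python
# Illustrative sketch only, NOT the authors' CMDQN code. All class and function
# names below (IntersectionState, QNetwork, combined_reward, CMDQNAgent) are
# hypothetical, chosen to mirror the concepts named in the abstract.

import random
from collections import deque
from dataclasses import dataclass

import torch
import torch.nn as nn
import torch.optim as optim


@dataclass
class IntersectionState:
    """Assumed feature layout echoing the abstract: vision-based local density,
    average speed, the signal phase at the preceding (upstream) junction, and
    the distance to that junction."""
    local_density: float          # vehicles detected on the approaches (from vision)
    avg_speed: float              # average vehicle speed near the intersection
    preceding_phase: int          # signal phase index shared by the upstream junction
    distance_to_preceding: float  # distance to the upstream junction (metres)

    def to_tensor(self) -> torch.Tensor:
        return torch.tensor(
            [self.local_density, self.avg_speed,
             float(self.preceding_phase), self.distance_to_preceding],
            dtype=torch.float32,
        )


class QNetwork(nn.Module):
    """Small feed-forward Q-network mapping a state vector to per-phase Q-values."""

    def __init__(self, state_dim: int = 4, n_phases: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_phases),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def combined_reward(local_wait: float, network_wait: float, alpha: float = 0.5) -> float:
    """Illustrative intersection + network-wide reward: negative waiting times,
    mixed by an assumed coefficient alpha (the paper's weighting is not given here)."""
    return -(alpha * local_wait + (1.0 - alpha) * network_wait)


class CMDQNAgent:
    """One decentralized agent; neighbours collaborate by sharing their signal state,
    which enters this agent's observation as `preceding_phase`."""

    def __init__(self, state_dim: int = 4, n_phases: int = 4,
                 gamma: float = 0.95, lr: float = 1e-3, epsilon: float = 0.1):
        self.q = QNetwork(state_dim, n_phases)
        self.gamma, self.epsilon, self.n_phases = gamma, epsilon, n_phases
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)

    def act(self, state: IntersectionState) -> int:
        if random.random() < self.epsilon:            # epsilon-greedy exploration
            return random.randrange(self.n_phases)
        with torch.no_grad():
            return int(self.q(state.to_tensor()).argmax().item())

    def remember(self, s, a, r, s_next):
        self.replay.append((s.to_tensor(), a, r, s_next.to_tensor()))

    def train_step(self, batch_size: int = 32):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s_next = torch.stack([b[3] for b in batch])
        # One-step Q-learning target: r + gamma * max_a' Q(s', a')
        target = r + self.gamma * self.q(s_next).max(dim=1).values.detach()
        pred = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In the decentralized setting described in the abstract, each junction would run one such agent, populating preceding_phase and distance_to_preceding from its upstream neighbour; the vision-based density estimation and the SUMO/TraCI simulation plumbing are deliberately omitted from this sketch.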