An Intelligent Traffic Signal Coordination Method Based on Asynchronous Decision-Making

Cited by: 0
Authors
Gao H. [1 ]
Luo J. [1 ]
Cai Q. [1 ]
Zheng Y. [1 ]
Affiliations
[1] College of Computer Science and Electronic Engineering, Hunan University, Changsha
Funding
National Natural Science Foundation of China
Keywords
asynchronous decision-making; coordination control; edge computing; reinforcement learning; traffic signal control;
DOI
10.7544/issn1000-1239.202220773
Abstract
The intelligent traffic signal control system is a component of the intelligent transportation system (ITS), offering real-time services for a safe and efficient traffic environment. However, owing to restricted communication, conventional adaptive traffic signal control methods cannot meet complex and changing traffic demands. A multi-agent adaptive coordination method (ADM) based on asynchronous decision-making and edge computing is presented to address communication delay and the resulting drop in signal utilization. First, an end-edge-cloud architecture is proposed for real-time collection and processing of environmental information. Then, asynchronous communication is introduced to enhance the coordination process among agents: an approach for calculating each agent's decision cycle is presented, and an asynchronous decision mechanism exploiting the agents' differing decision cycles is devised. The experimental results show that edge computing is well suited to traffic signal control scenarios with stringent real-time requirements. In addition, compared with fixed-time control (FT) and the independent Q-learning decision algorithm (IQA), ADM achieves collaboration among the agents through the asynchronous decision mechanism and the neighbor information base, reducing the average vehicle waiting queue length and improving intersection time utilization. © 2023 Science Press. All rights reserved.
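The abstract describes independent Q-learning agents that act on their own, per-intersection decision cycles rather than in lockstep. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the tabular agent, the toy queue dynamics, the reward (negative queue length), and the rule "an agent decides only at multiples of its own cycle" are all assumptions made for the example.

```python
import random

class SignalAgent:
    """Independent tabular Q-learning agent for one intersection (illustrative).

    States are assumed to be discretized queue lengths and actions signal
    phases; both encodings are hypothetical, chosen only for this sketch.
    """
    def __init__(self, n_states, n_actions, cycle, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.cycle = cycle            # this agent's own decision cycle, in steps
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions
        self.decisions = 0            # how many decision instants this agent used

    def act(self, state):
        """Epsilon-greedy action selection over the Q-table row."""
        self.decisions += 1
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, r, s2):
        """Standard one-step Q-learning update."""
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def simulate(agents, horizon, seed=0):
    """Asynchronous loop: each agent decides only when its own cycle elapses."""
    rng = random.Random(seed)
    states = [0] * len(agents)        # discretized queue length per intersection
    for t in range(horizon):
        for i, ag in enumerate(agents):
            if t % ag.cycle != 0:
                continue              # not this agent's decision instant
            s = states[i]
            a = ag.act(s)
            # Toy dynamics: the "matching" phase drains the queue, otherwise
            # the queue may grow by one vehicle bucket (capped at 4).
            if a == s % ag.n_actions:
                s2 = max(0, s - 1)
            else:
                s2 = min(4, s + rng.randint(0, 1))
            r = -s2                   # reward: negative queue length
            ag.update(s, a, r, s2)
            states[i] = s2
    return agents
```

With two agents whose cycles are 2 and 3 steps and a 12-step horizon, the first agent decides 6 times (t = 0, 2, ..., 10) and the second only 4 times (t = 0, 3, 6, 9), which is the asynchrony the method relies on: neighbors need not wait for a common global decision tick.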
Pages: 2797-2805 (8 pages)