Graph neural network and reinforcement learning for multi-agent cooperative control of connected autonomous vehicles

Cited by: 112
Authors
Chen, Sikai [1 ,2 ,3 ]
Dong, Jiqian [1 ,2 ]
Ha, Paul [1 ,2 ]
Li, Yujie [1 ,2 ]
Labi, Samuel [1 ,2 ]
Affiliations
[1] Purdue Univ, Ctr Connected & Automated Transportat CCAT, W Lafayette, IN 47907 USA
[2] Purdue Univ, Lyles Sch Civil Engn, W Lafayette, IN 47907 USA
[3] Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA
Keywords
CRACK DETECTION; TRAJECTORY OPTIMIZATION; DYNAMIC CLASSIFICATION; INTERSECTION CONTROL; MODEL;
DOI
10.1111/mice.12702
Chinese Library Classification (CLC) code
TP39 [Computer Applications];
Subject classification codes
081203; 0835
Abstract
A connected autonomous vehicle (CAV) network can be defined as a set of connected vehicles, including CAVs, that operate on a specific spatial scope, which may be a road network, corridor, or segment. The spatial scope constitutes an environment where traffic information is shared and instructions are issued for controlling the CAVs' movements. Within such a spatial scope, high-level cooperation among CAVs, fostered by joint planning and control of their movements, can greatly enhance the safety and mobility performance of their operations. Unfortunately, the highly combinatorial and volatile nature of CAV networks, due to the dynamic number of agents (vehicles) and the fast-growing joint action space associated with multi-agent driving tasks, poses difficulty in achieving cooperative control. The problem is NP-hard and cannot be efficiently resolved using rule-based control techniques. Also, there is a great deal of information in the literature regarding sensing technologies and control logic in CAV operations, but relatively little on the integration of information from collaborative sensing and connectivity sources. Therefore, we present a novel deep reinforcement learning-based algorithm that combines a graph convolutional neural network with a deep Q-network to form an innovative graph convolution Q-network that serves as the information fusion module and decision processor. In this study, the spatial scope considered for the CAV network is a multi-lane road corridor. We demonstrate the proposed control algorithm using the application context of freeway lane-changing at the approaches to an exit ramp. For purposes of comparison, the proposed model is evaluated vis-à-vis traditional rule-based and long short-term memory-based fusion models. The results suggest that the proposed model is capable of aggregating information received from sensing and connectivity sources and prescribing efficient operative lane-change decisions for multiple CAVs in a manner that enhances safety and mobility. In this way, the operational intentions of individual CAVs can be fulfilled even in partially observed and highly dynamic mixed traffic streams. The paper presents experimental evidence to demonstrate that the proposed algorithm can significantly enhance CAV operations. The proposed algorithm can be deployed at roadside units, cloud platforms, or other centralized control facilities.
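The graph convolution Q-network described in the abstract fuses each vehicle's state features over a connectivity graph before estimating Q-values for discrete lane-change actions. Below is a minimal, hedged sketch of such an architecture in PyTorch; it is not the authors' implementation, and the layer sizes, the feature dimensionality, and the three-action space (keep lane, change left, change right) are assumptions made purely for illustration.

import torch
import torch.nn as nn

class GraphConvQNetwork(nn.Module):
    """Illustrative GCN + DQN fusion (assumed sketch): graph convolutions
    aggregate neighbor information over the V2V connectivity graph, then a
    linear head outputs per-vehicle Q-values for lane-change actions."""
    def __init__(self, num_features: int, hidden_dim: int = 64, num_actions: int = 3):
        super().__init__()
        self.gc1 = nn.Linear(num_features, hidden_dim)    # weights of graph conv layer 1
        self.gc2 = nn.Linear(hidden_dim, hidden_dim)      # weights of graph conv layer 2
        self.q_head = nn.Linear(hidden_dim, num_actions)  # per-vehicle action values

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, F) node features per CAV (e.g., speed, lane index, gap to ramp)
        # adj: (N, N) 0/1 connectivity matrix derived from communication range
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)  # row-normalize adjacency
        h = torch.relu(a_hat @ self.gc1(x))             # neighborhood aggregation, layer 1
        h = torch.relu(a_hat @ self.gc2(h))             # neighborhood aggregation, layer 2
        return self.q_head(h)                           # (N, num_actions) Q-values

# Usage sketch: greedy joint action for 4 CAVs with 6 features each.
net = GraphConvQNetwork(num_features=6)
x = torch.randn(4, 6)                   # placeholder fused sensing/connectivity features
adj = (torch.rand(4, 4) > 0.5).float()  # placeholder connectivity graph
actions = net(x, adj).argmax(dim=1)     # 0 = keep lane, 1 = change left, 2 = change right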
Pages: 838-857
Page count: 20
Related papers
50 records in total (10 shown below)
  • [1] Multi-Agent Deep Reinforcement Learning for Cooperative Connected Vehicles
    Kwon, Dohyun
    Kim, Joongheon
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019
  • [2] Multi-agent reinforcement learning for cooperative lane changing of connected and autonomous vehicles in mixed traffic
    Zhou W.
    Chen D.
    Yan J.
    Li Z.
    Yin H.
    Ge W.
    Autonomous Intelligent Systems, 2022, 2 (01)
  • [3] A Multi-agent Reinforcement Learning Based Control Method for Connected and Autonomous Vehicles in A Mixed Platoon
    Xu Y.
    Shi Y.
    Tong X.
    Chen S.
    Ge Y.
    IEEE Transactions on Vehicular Technology, 2024, 73 (11): 1-14
  • [4] Autonomous and cooperative control of UAV cluster with multi-agent reinforcement learning
    Xu, D.
    Chen, G.
    AERONAUTICAL JOURNAL, 2022, 126 (1300): 932-951
  • [5] Multi-agent reinforcement learning for autonomous vehicles: a survey
    Dinneweth J.
    Boubezoul A.
    Mandiau R.
    Espié S.
    Autonomous Intelligent Systems, 2022, 2 (01)
  • [6] Multi-Agent Reinforcement Learning for Autonomous On Demand Vehicles
    Boyali, Ali
    Hashimoto, Naohisa
    John, Vijay
    Acarman, Tankut
    2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19), 2019: 1461-1468
  • [7] Multi-agent reinforcement learning for safe lane changes by connected and autonomous vehicles: A survey
    Hegde, Bharathkumar
    Bouroche, Melanie
    AI COMMUNICATIONS, 2024, 37 (02): 203-222
  • [8] A study on multi-agent reinforcement learning for autonomous distribution vehicles
    Ergün, Serap
    Iran Journal of Computer Science, 2023, 6 (4): 297-305
  • [9] Multi-Agent Deep Reinforcement Learning to Manage Connected Autonomous Vehicles at Tomorrow's Intersections
    Antonio, Guillen-Perez
    Maria-Dolores, Cano
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (07): 7033-7043
  • [10] A Multi-Agent Reinforcement Learning Approach for Safe and Efficient Behavior Planning of Connected Autonomous Vehicles
    Han, Songyang
    Zhou, Shanglin
    Wang, Jiangwei
    Pepin, Lynn
    Ding, Caiwen
    Fu, Jie
    Miao, Fei
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (05): 3654-3670