Joint Resource Allocation for UAV-Assisted V2X Communication With Mean Field Multi-Agent Reinforcement Learning

Times cited: 0
Authors
Xu, Yue [1 ,2 ]
Zheng, Linjiang [1 ,2 ]
Wu, Xiao [1 ,2 ,3 ]
Tang, Yi [4 ]
Liu, Weining [1 ,2 ]
Sun, Dihua [2 ,5 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Univ, Key Lab Cyber Phys Social Dependable Serv Computat, Chongqing 400044, Peoples R China
[3] Chongqing Shouxun Technol Co, Chongqing 401120, Peoples R China
[4] Chongqing Expressway Grp Co Ltd, Chongqing 401147, Peoples R China
[5] Chongqing Univ, Coll Automat, Chongqing 400044, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Resource management; Quality of service; Autonomous aerial vehicles; Optimization; NOMA; Motion control; Complexity theory; Vehicular communication network; joint resource allocation; mean-field game theory; multi-agent deep reinforcement learning (MARL); INTERNET;
DOI
10.1109/TVT.2024.3466116
CLC classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline code
0808 ; 0809 ;
Abstract
Vehicle-to-Everything (V2X) communication, as a fundamental part of intelligent transport systems, has the potential to improve road safety and traffic efficiency. However, conventional static infrastructures such as roadside units (RSUs) often suffer from overload due to the uneven spatiotemporal distribution of vehicles. Although the line-of-sight (LoS) propagation characteristics and high mobility of autonomous aerial vehicles (AAVs) have given rise to UAV-assisted vehicular communication, scarce spectrum resources, complex interference, restricted energy budgets, and vehicle mobility still pose significant challenges. In this paper, we combine mean-field game (MFG) theory with multi-agent reinforcement learning (MARL) to allocate resources for RSUs and UAVs in non-orthogonal multiple access (NOMA) V2X communication networks. To find rational global solutions for infrastructures under power and QoS constraints, a joint sub-band scheduling and transmit power allocation problem is formulated. The MARL technique endows agents with the capability of self-learning, while MFG theory tackles the tremendous overhead of agent interactions. Integrating MFG with MARL enables infrastructures to act as agents that interact with one another and account for the influence of the surrounding environment, so as to maximize global energy efficiency. Simulation results demonstrate the effectiveness of UAV-assisted V2X communication and show that the proposed method outperforms a state-of-the-art resource allocation scheme in both average energy efficiency and probability of failure.
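The mean-field idea summarized above is that each agent reacts to the average action of its neighbors rather than to every individual agent, which keeps the learning problem tractable as the number of infrastructures grows. Below is a minimal single-state sketch of a mean-field Q-learning update; the agent count, the discrete power levels, the toy reward, and the mean-action binning are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 8        # infrastructures (RSUs/UAVs) acting as agents (assumed)
N_ACTIONS = 4       # e.g. discrete transmit-power levels (assumed)
N_MEAN_BINS = 5     # discretization of the neighbors' mean action (assumed)
ALPHA, GAMMA = 0.1, 0.9

# One Q-table per agent, indexed by (own action, binned mean action).
# The environment state is omitted for brevity (a single-state repeated game).
Q = np.zeros((N_AGENTS, N_ACTIONS, N_MEAN_BINS))

def mean_action_bin(actions, i):
    """Average the neighbors' normalized actions, then bin (the mean-field step)."""
    others = np.delete(actions, i)
    mean = others.mean() / (N_ACTIONS - 1)          # normalized to [0, 1]
    return min(int(mean * N_MEAN_BINS), N_MEAN_BINS - 1)

def reward(actions, i):
    """Toy energy-efficiency proxy: own rate, reduced by interference and power cost."""
    own_power = actions[i] + 1
    interference = (np.delete(actions, i) + 1).sum() / N_AGENTS
    return own_power / (1.0 + interference) - 0.1 * own_power

for step in range(2000):
    # Epsilon-greedy action per agent against the current mean field.
    actions = np.zeros(N_AGENTS, dtype=int)
    for i in range(N_AGENTS):
        if rng.random() < 0.1:
            actions[i] = rng.integers(N_ACTIONS)
        else:
            # Greedy over own action, averaging the value across mean-action bins.
            actions[i] = Q[i].mean(axis=1).argmax()
    for i in range(N_AGENTS):
        b = mean_action_bin(actions, i)
        r = reward(actions, i)
        # Mean-field Q update: bootstrap with the best own action given the mean field.
        target = r + GAMMA * Q[i, :, b].max()
        Q[i, actions[i], b] += ALPHA * (target - Q[i, actions[i], b])
```

Because each agent conditions its Q-function on a single aggregated quantity instead of the joint action of all others, the per-agent table grows linearly with the number of power levels rather than exponentially with the number of agents, which is the interaction-overhead reduction the abstract attributes to MFG theory.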
Pages: 1209 - 1223
Page count: 15
Related papers
50 records total
  • [31] Joint Resource Allocation for V2X Sensing and Communication Based on MADDPG
    Zhong, Zhiyong
    Peng, Zhangyou
    IEEE ACCESS, 2025, 13 : 12764 - 12776
  • [32] Performance Improvement for UAV-Assisted Mobile Edge Computing with Multi-Agent Deep Reinforcement Learning
    Suzuki, Kohei
    Sugawara, Toshiharu
    2024 INTERNATIONAL CONFERENCE ON INNOVATIONS IN INTELLIGENT SYSTEMS AND APPLICATIONS, INISTA, 2024,
  • [33] Spectrum Management with Congestion Avoidance for V2X Based on Multi-Agent Reinforcement Learning
    Althamary, Ibrahim
    Lin, Jun-Yong
    Huang, Chih-Wei
    2020 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2020,
  • [34] QoS based Deep Reinforcement Learning for V2X Resource Allocation
    Bhadauria, Shubhangi
    Shabbir, Zohaib
    Roth-Mandutz, Elke
    Fischer, Georg
    2020 IEEE INTERNATIONAL BLACK SEA CONFERENCE ON COMMUNICATIONS AND NETWORKING (BLACKSEACOM), 2020,
  • [35] Joint Communication Resource Allocation and Velocity Optimization in Advanced Air Mobility via Multi-agent Reinforcement Learning
    Han, Ruixuan
    Li, Summer
    Knoblock, Eric J.
    Gasper, Michael R.
    Li, Hongxiang
    Apaza, Rafael D.
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 1501 - 1506
  • [36] Matching Combined Heterogeneous Multi-Agent Reinforcement Learning for Resource Allocation in NOMA-V2X Networks
    Gao, Ang
    Zhu, Ziqing
    Zhang, Jiankang
    Liang, Wei
    Hu, Yansu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (10) : 15109 - 15124
  • [37] Spectrum-Energy-Efficient Mode Selection and Resource Allocation for Heterogeneous V2X Networks: A Federated Multi-Agent Deep Reinforcement Learning Approach
    Gui, Jinsong
    Lin, Liyan
    Deng, Xiaoheng
    Cai, Lin
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (03) : 2689 - 2704
  • [38] Ensuring Threshold AoI for UAV-Assisted Mobile Crowdsensing by Multi-Agent Deep Reinforcement Learning With Transformer
    Wang, Hao
    Liu, Chi Harold
    Yang, Haoming
    Wang, Guoren
    Leung, Kin K.
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (01) : 566 - 581
  • [39] Task Offloading with LLM-Enhanced Multi-Agent Reinforcement Learning in UAV-Assisted Edge Computing
    Zhu, Feifan
    Huang, Fei
    Yu, Yantao
    Liu, Guojin
    Huang, Tiancong
    SENSORS, 2025, 25 (01)
  • [40] Adaptive mean field multi-agent reinforcement learning
    Wang, Xiaoqiang
    Ke, Liangjun
    Zhang, Gewei
    Zhu, Dapeng
    INFORMATION SCIENCES, 2024, 669