Resource Allocation in V2X Communications Based on Multi-Agent Reinforcement Learning with Attention Mechanism

Cited by: 9
Authors
Ding, Yuanfeng [1]
Huang, Yan [2]
Tang, Li [2]
Qin, Xizhong [1]
Jia, Zhenhong [1]
Affiliations
[1] Xinjiang Univ, Coll Informat Sci & Engn, Urumqi 830000, Peoples R China
[2] China Mobile Commun Grp Xinjiang Co Ltd, Network Dept, Urumqi 830000, Peoples R China
Keywords
vehicle-to-everything; resource allocation; attention mechanism; multi-agent reinforcement learning; low latency; VEHICULAR COMMUNICATIONS; LOW-LATENCY; INTERNET;
DOI
10.3390/math10193415
Chinese Library Classification (CLC): O1 [Mathematics]
Subject Classification Codes: 0701; 070101
Abstract
In this paper, we study the joint spectrum and power allocation problem for multiple vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) users in cellular vehicle-to-everything (C-V2X) communication, aiming to maximize the sum rate of the V2I links while satisfying the low-latency requirements of the V2V links. Accurate channel state information (CSI) is hard to obtain because of vehicle mobility. In addition, effective sensing of state information among vehicles becomes difficult in an environment with complex and diverse information, which hinders cooperative resource allocation. We therefore propose a multi-agent deep reinforcement learning framework based on an attention mechanism (AMARL) to improve V2X communication performance. Specifically, to cope with vehicle mobility, we model the problem as a multi-agent reinforcement learning process in which each V2V link is regarded as an agent and all agents jointly interact with the environment. Each agent allocates spectrum and power through its own deep Q-network (DQN). To enhance effective information exchange and collaboration among vehicles, we introduce an attention mechanism that focuses on the most relevant information, which reduces signaling overhead and optimizes communication performance more explicitly. Experimental results show that the proposed AMARL-based approach satisfies the requirements of a high rate for V2I links and low latency for V2V links, and it also exhibits excellent adaptability to environmental changes.
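The abstract describes a per-agent DQN whose input combines the agent's own observation with an attention-weighted summary of the other agents' information, and whose output is a joint spectrum/power decision. The PyTorch sketch below illustrates one plausible network of that kind; it is not the authors' implementation, and the observation dimension, layer sizes, number of neighbors, and the joint (sub-band, power-level) action encoding are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a per-V2V-agent DQN that attends
# over the observations shared by other agents before estimating Q-values
# for joint (sub-band, power-level) actions. All dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionDQN(nn.Module):
    def __init__(self, obs_dim, num_subbands=4, num_power_levels=3, embed_dim=64):
        super().__init__()
        self.num_actions = num_subbands * num_power_levels  # joint spectrum/power choice
        # Shared encoder for the agent's own and the neighbors' observations.
        self.encoder = nn.Linear(obs_dim, embed_dim)
        # Scaled dot-product attention: the agent's own embedding is the query,
        # the other agents' embeddings serve as keys and values.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        # DQN head over [own embedding || attended neighbor summary].
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, self.num_actions),
        )

    def forward(self, own_obs, neighbor_obs):
        # own_obs: (batch, obs_dim); neighbor_obs: (batch, n_neighbors, obs_dim)
        own = F.relu(self.encoder(own_obs))                    # (B, E)
        nbr = F.relu(self.encoder(neighbor_obs))               # (B, N, E)
        q = self.q_proj(own).unsqueeze(1)                      # (B, 1, E)
        k, v = self.k_proj(nbr), self.v_proj(nbr)              # (B, N, E)
        scores = (q @ k.transpose(1, 2)) / k.shape[-1] ** 0.5  # (B, 1, N)
        attn = torch.softmax(scores, dim=-1)                   # weight relevant agents
        context = (attn @ v).squeeze(1)                        # (B, E)
        return self.head(torch.cat([own, context], dim=-1))    # Q-values, (B, A)


if __name__ == "__main__":
    agent = AttentionDQN(obs_dim=10)
    own = torch.randn(2, 10)           # e.g., local CSI and latency-related features
    neighbors = torch.randn(2, 5, 10)  # observations shared by 5 other V2V agents
    q_values = agent(own, neighbors)
    action = q_values.argmax(dim=-1)   # index into (sub-band, power-level) pairs
    print(q_values.shape, action)
```

Picking the action with the largest Q-value corresponds to jointly selecting a sub-band and a transmit power level; in the framework sketched by the abstract, each V2V agent would make this choice independently while learning in the shared environment.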
Pages: 19
Related Papers (50 records in total)
  • [1] Multi-Agent Reinforcement Learning-Based Resource Management for V2X Communication
    Zhao, Nan
    Wang, Jiaye
    Jin, Bo
    Wang, Ru
    Wu, Minghu
    Liu, Yu
    Zheng, Lufeng
    [J]. INTERNATIONAL JOURNAL OF MOBILE COMPUTING AND MULTIMEDIA COMMUNICATIONS, 2023, 14 (01)
  • [2] QoS based Multi-Agent vs. Single-Agent Deep Reinforcement Learning for V2X Resource Allocation
    Bhadauria, Shubhangi
    Ravichandran, Lavanya
    Roth-Mandutz, Elke
    Fischer, Georg
    [J]. 2021 IEEE SYMPOSIUM ON FUTURE TELECOMMUNICATION TECHNOLOGIES (SOFTT), 2021: 39 - 45
  • [3] Decentralized Multi-Agent DQN-Based Resource Allocation for Heterogeneous Traffic in V2X Communications
    Lee, Insung
    Kim, Duk Kyung
    [J]. IEEE ACCESS, 2024, 12 : 3070 - 3084
  • [4] Deep Reinforcement Learning-Based Resource Allocation for Cellular V2X Communications
    Chung, Yi-Ching
    Chang, Hsin-Yuan
    Chang, Ronald Y.
    Chung, Wei-Ho
    [J]. 2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,
  • [5] Meta-Reinforcement Learning Based Resource Allocation for Dynamic V2X Communications
    Yuan, Yi
    Zheng, Gan
    Wong, Kai-Kit
    Letaief, Khaled B.
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (09) : 8964 - 8977
  • [6] Multi-agent reinforcement learning for long-term network resource allocation through auction: A V2X application
    Tan, Jing
    Khalili, Ramin
    Karl, Holger
    Hecker, Artur
    [J]. COMPUTER COMMUNICATIONS, 2022, 194 : 333 - 347
  • [7] Multi-Agent DRL-Based Two-Timescale Resource Allocation for Network Slicing in V2X Communications
    Lu, Binbin
    Wu, Yuan
    Qian, Liping
    Zhou, Sheng
    Zhang, Haixia
    Lu, Rongxing
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (06): 6744 - 6758
  • [8] QoS based Deep Reinforcement Learning for V2X Resource Allocation
    Bhadauria, Shubhangi
    Shabbir, Zohaib
    Roth-Mandutz, Elke
    Fischer, Georg
    [J]. 2020 IEEE INTERNATIONAL BLACK SEA CONFERENCE ON COMMUNICATIONS AND NETWORKING (BLACKSEACOM), 2020,
  • [9] Spectrum Management with Congestion Avoidance for V2X Based on Multi-Agent Reinforcement Learning
    Althamary, Ibrahim
    Lin, Jun-Yong
    Huang, Chih-Wei
    [J]. 2020 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2020,