Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning

Cited by: 306
Authors:
Liang, Le [1 ,2 ]
Ye, Hao [1 ]
Li, Geoffrey Ye [1 ]
Affiliations:
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Intel Labs, Hillsboro, OR 97124 USA
Funding:
US National Science Foundation
Keywords:
Vehicular networks; distributed spectrum access; spectrum and power allocation; multi-agent reinforcement learning; resource allocation
DOI:
10.1109/JSAC.2019.2933962
CLC Classification:
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes:
0808; 0809
Abstract:
This paper investigates the spectrum sharing problem in vehicular networks based on multi-agent reinforcement learning, where multiple vehicle-to-vehicle (V2V) links reuse the frequency spectrum preoccupied by vehicle-to-infrastructure (V2I) links. Fast channel variations in high mobility vehicular environments preclude the possibility of collecting accurate instantaneous channel state information at the base station for centralized resource management. In response, we model the resource sharing as a multi-agent reinforcement learning problem, which is then solved using a fingerprint-based deep Q-network method that is amenable to a distributed implementation. The V2V links, each acting as an agent, collectively interact with the communication environment, receive distinctive observations yet a common reward, and learn to improve spectrum and power allocation through updating Q-networks using the gained experiences. We demonstrate that with a proper reward design and training mechanism, the multiple V2V agents successfully learn to cooperate in a distributed way to simultaneously improve the sum capacity of V2I links and payload delivery rate of V2V links.
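To make the fingerprint-based approach above concrete, here is a minimal Python sketch of one V2V agent's action selection. This is not the authors' code: the observation size, the numbers of sub-bands and power levels, and the toy LinearQ stand-in for the deep Q-network are illustrative assumptions. Only the fingerprint construction (appending the normalized training iteration and the exploration rate to the local observation) and the epsilon-greedy joint sub-band/power choice reflect the mechanism the abstract describes.

import random
import numpy as np

N_SUBBANDS = 4    # assumed: number of V2I sub-bands a V2V link may reuse
N_POWER = 3       # assumed: number of discrete transmit-power levels
N_ACTIONS = N_SUBBANDS * N_POWER
OBS_DIM = 16      # assumed: size of the local channel/interference observation

def with_fingerprint(local_obs, train_iter, max_iter, epsilon):
    """Append the fingerprint (normalized iteration index, current epsilon)
    so the Q-network can condition on the other agents' evolving policies."""
    return np.concatenate([local_obs, [train_iter / max_iter, epsilon]])

class LinearQ:
    """Toy linear stand-in for the deep Q-network (an assumption, for brevity)."""
    def __init__(self, in_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_actions, in_dim))
    def __call__(self, obs):
        return self.W @ obs

class V2VAgent:
    """One V2V link acting as an agent; per the abstract, all agents would
    receive a common training reward (V2I sum capacity plus V2V payload
    delivery), whose shaping is omitted here."""
    def __init__(self, q_network):
        self.q = q_network

    def act(self, local_obs, train_iter, max_iter, epsilon):
        obs = with_fingerprint(local_obs, train_iter, max_iter, epsilon)
        if random.random() < epsilon:                  # explore
            a = random.randrange(N_ACTIONS)
        else:                                          # exploit
            a = int(np.argmax(self.q(obs)))
        return divmod(a, N_POWER)                      # (sub-band, power level)

# Usage: one exploratory decision early in training.
agent = V2VAgent(LinearQ(OBS_DIM + 2, N_ACTIONS))
band, power = agent.act(np.zeros(OBS_DIM), train_iter=100, max_iter=3000, epsilon=0.5)

Encoding the spectrum and power choice as a single discrete action keeps the Q-network's output layer small; the experience replay and Q-network updates from gained experiences are omitted from this sketch.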
Pages: 2282-2292
Page count: 11
Related Papers
50 items in total
  • [31] Multi-Agent Reinforcement Learning for Dynamic Spectrum Access
    Jiang, Huijuan
    Wang, Tianyu
    Wang, Shaowei
    [J]. ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [32] Fast Spectrum Sharing in Vehicular Networks: A Meta Reinforcement Learning Approach
    Huang, Kai
    Luo, Zezhou
    Liang, Le
    Jin, Shi
    [J]. 2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [33] Multi-Agent Reinforcement Learning with Two Step Intention Sharing
    Wu, Jun-Feng
    Wang, Wen
    Wang, Liang
    Tao, Xian-Ping
    Hu, Hao
    Wu, Hai-Jun
[J]. Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (09): 1820-1837
  • [34] Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning
    Gerstgrasser, Matthias
    Danino, Tom
    Keren, Sarah
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [35] Improving scalability of multi-agent reinforcement learning with parameters sharing
    Yang, Ning
    Shi, PeiChang
    Ding, Bo
    Feng, Dawei
[J]. 2022 IEEE 13TH INTERNATIONAL CONFERENCE ON JOINT CLOUD COMPUTING (JCC 2022), 2022: 37-42
  • [36] Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing
    Christianos, Filippos
    Papoudakis, Georgios
    Rahman, Arrasy
Albrecht, Stefano V.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [37] A federated multi-agent deep reinforcement learning for vehicular fog computing
    Shabir, Balawal
    Rahman, Anis U.
    Malik, Asad Waqar
    Buyya, Rajkumar
    Khan, Muazzam A.
    [J]. JOURNAL OF SUPERCOMPUTING, 2023, 79 (06): 6141-6167
  • [39] The Application of Multi-Agent Reinforcement Learning in UAV Networks
    Cui, Jingjing
    Liu, Yuanwei
    Nallanathan, Arumugam
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2019,
  • [40] A Cooperative Spectrum Sensing With Multi-Agent Reinforcement Learning Approach in Cognitive Radio Networks
    Gao, Ang
    Du, Chengyuan
    Ng, Soon Xin
    Liang, Wei
[J]. IEEE COMMUNICATIONS LETTERS, 2021, 25 (08): 2604-2608