Multi-Agent Deep Reinforcement Learning for Spectral Efficiency Optimization in Vehicular Optical Camera Communications

Cited: 1
Authors
Islam, Amirul [1 ]
Thomos, Nikolaos [1 ]
Musavian, Leila [1 ]
Affiliations
[1] Univ Essex, Sch Comp Sci & Elect Engn, Colchester CO4 3SQ, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Reliability; Cameras; Q-learning; Modulation; Light emitting diodes; Radio frequency; Deep learning; Vehicular communication; deep reinforcement learning; optical camera communication; spectral efficiency maximization; Lagrangian relaxation; low latency; VISIBLE-LIGHT COMMUNICATION;
DOI
10.1109/TMC.2023.3278277
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
In this article, we propose a vehicular optical camera communication (OCC) system that can meet low bit error rate (BER) and ultra-low latency constraints. First, we formulate a sum spectral efficiency optimization problem that aims at finding the vehicle speeds and modulation orders that maximize the sum spectral efficiency subject to reliability and latency constraints. This problem is a mixed-integer program with nonlinear constraints and is NP-hard even for a small set of modulation orders. To overcome the high computational and time complexity that prevents its solution with traditional methods, we first model the optimization problem as a partially observable Markov decision process. We then solve it within an independent Q-learning framework, where each vehicle acts as an independent agent. Since the state-action space is large, we adopt deep reinforcement learning (DRL) to solve the problem efficiently. As the problem is constrained, we apply Lagrangian relaxation before solving it with the DRL framework. Simulation results demonstrate that the proposed DRL-based optimization scheme effectively learns to maximize the sum spectral efficiency while satisfying the BER and ultra-low latency constraints. The evaluation further shows that our scheme achieves superior performance compared to radio frequency-based vehicular communication systems and other vehicular OCC variants of our scheme.
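The Lagrangian relaxation step summarized in the abstract can be illustrated with a toy sketch: constraint violations (BER and latency) are folded into the per-step reward via Lagrange multipliers, which are then raised by subgradient ascent whenever a constraint is violated. This is not the authors' implementation; the thresholds `BER_MAX` and `LAT_MAX`, the dual step size `ETA`, and all numeric values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): Lagrangian relaxation of a
# constrained reward for an independent Q-learning agent. BER_MAX, LAT_MAX,
# and ETA are assumed placeholder values, not values from the paper.

BER_MAX = 1e-3   # assumed reliability threshold (max tolerable BER)
LAT_MAX = 1e-3   # assumed latency threshold in seconds
ETA = 0.1        # assumed dual (Lagrange multiplier) step size

def lagrangian_reward(spectral_eff, ber, latency, lam_ber, lam_lat):
    """Shape the reward by penalizing BER and latency constraint violations."""
    ber_violation = max(0.0, ber - BER_MAX)
    lat_violation = max(0.0, latency - LAT_MAX)
    reward = spectral_eff - lam_ber * ber_violation - lam_lat * lat_violation
    return reward, ber_violation, lat_violation

def dual_update(lam, violation):
    """Projected subgradient ascent on a multiplier, kept non-negative."""
    return max(0.0, lam + ETA * violation)

# A feasible sample leaves the reward equal to the spectral efficiency ...
r_ok, _, _ = lagrangian_reward(4.0, 5e-4, 8e-4, lam_ber=1.0, lam_lat=1.0)
# ... while an infeasible one is penalized and raises its multiplier.
r_bad, v_ber, _ = lagrangian_reward(4.0, 2e-3, 8e-4, lam_ber=1.0, lam_lat=1.0)
lam_ber_new = dual_update(1.0, v_ber)
```

The unconstrained shaped reward can then be maximized by a standard DRL agent (e.g., a deep Q-network per vehicle), with the multipliers updated on a slower timescale than the Q-function.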
Pages: 3666-3679 (14 pages)