Improving Channel Utilization in VANETs Using Q-Learning-Based Data Rate Congestion Control

Citations: 0
Authors:
Nuthalapati, Gnana Shilpa [1 ]
Jaekel, Arunita [1 ]
Affiliations:
[1] Univ Windsor, Windsor, ON, Canada
Keywords:
Vehicular Ad Hoc Network (VANET); Basic Safety Message (BSM); Congestion Control; Reinforcement Learning; Q-Learning; Channel Busy Ratio (CBR)
DOI:
10.1109/AICCSA59173.2023.10479289
CLC Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
A Vehicular Ad Hoc Network (VANET) is an emerging wireless technology vital to the Intelligent Transportation System (ITS), which aims to mitigate traffic problems and improve road safety. Many VANET safety applications rely on the periodic broadcast of vehicle status information in the form of Basic Safety Messages (BSMs). As vehicle density increases, the wireless channel becomes congested, making safety applications unreliable. Various decentralized congestion control algorithms have been proposed to reduce channel congestion by adjusting transmission parameters such as message rate, transmission power, and data rate. This paper proposes a data rate-based congestion control technique that uses Q-learning to keep the channel load below a target threshold. The congestion problem is formulated as a Markov Decision Process (MDP) and solved with a Q-learning algorithm whose goal is to select the most appropriate data rate for each BSM transmission so that the channel load remains at an acceptable level. Data obtained from a simulated dynamic traffic environment is used to train the Q-learning algorithm. Our results indicate that the proposed algorithm achieves the target channel load while reducing packet loss compared to existing data rate-based approaches.
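The paper itself does not include code; as a rough illustration of the approach the abstract describes (states derived from the measured CBR, actions drawn from a set of candidate data rates, and a reward that favors channel load near the target), a minimal tabular Q-learning sketch follows. The data-rate set, CBR discretization, target value, reward shape, and hyperparameters below are assumptions made for illustration, not the authors' implementation.

```python
import random

# Candidate IEEE 802.11p data rates in Mbps (assumed action set;
# the paper's exact set is not reproduced here).
DATA_RATES = [3, 6, 9, 12]

# Discretize the measured Channel Busy Ratio (CBR) into coarse states.
CBR_BINS = [0.2, 0.4, 0.6, 0.8]        # bin edges -> 5 discrete states
TARGET_CBR = 0.6                       # assumed target channel load

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def cbr_state(cbr: float) -> int:
    """Map a CBR measurement in [0, 1] to a discrete state index."""
    return sum(cbr > edge for edge in CBR_BINS)

def reward(cbr: float) -> float:
    """Assumed reward: penalize deviation of the channel load from the target."""
    return -abs(cbr - TARGET_CBR)

# Q-table: one row per CBR state, one column per candidate data rate.
q_table = [[0.0] * len(DATA_RATES) for _ in range(len(CBR_BINS) + 1)]

def choose_action(state: int) -> int:
    """Epsilon-greedy selection of a data-rate index for the next BSM."""
    if random.random() < EPSILON:
        return random.randrange(len(DATA_RATES))
    row = q_table[state]
    return row.index(max(row))

def update(state: int, action: int, cbr_after: float) -> int:
    """Standard Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    next_state = cbr_state(cbr_after)
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (
        reward(cbr_after) + GAMMA * best_next - q_table[state][action]
    )
    return next_state
```

In a simulated traffic environment of the kind the abstract describes, each vehicle would call choose_action before transmitting a BSM and update once the next CBR measurement arrives, gradually learning which data rate keeps the channel load near the target in each congestion regime.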
Pages: 7
Related Papers (50 total):
  • [1] Q-learning-based congestion control strategy for information-centric networking
    Meng, Wei
    Zhang, Lingling
    INTERNET TECHNOLOGY LETTERS, 2021, 4 (05)
  • [2] QRAVDR: A deep Q-learning-based RSU-Assisted Video Data Routing algorithm for VANETs
    Ma, Huahong
    Li, Shuangjin
    Wu, Honghai
    Xing, Ling
    Zhang, Xiaohui
    AD HOC NETWORKS, 2025, 171
  • [3] Q-Learning-Based Inter-Networking Mobile Number Portability Congestion Control Mechanism
    Wang Anping
    Li Yuan
    Lin Lin
    CHINA COMMUNICATIONS, 2011, 8 (05) : 165 - 172
  • [4] Q-Learning-Based Multi-Rate Optimal Control for Process Industries
    Xia, Zhenxing
    Hu, Mengjie
    Dai, Wei
    Yan, Huaicheng
    Ma, Xiaoping
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (06) : 2006 - 2010
  • [5] Improving Scheduling Performance Using a Q-Learning-Based Leasing Policy for Clouds
    Foelling, Alexander
    Hofmann, Matthias
    EURO-PAR 2012 PARALLEL PROCESSING, 2012, 7484 : 337 - 349
  • [6] Q-learning-based H∞ control for LPV systems
    Wang, Hongye
    Wen, Jiwei
    Wan, Haiying
    Xue, Huiwen
    ASIAN JOURNAL OF CONTROL, 2024
  • [7] Application-Based Congestion Control Policy for the Communication Channel in VANETs
    Sepulcre, Miguel
    Gozalvez, Javier
    Haerri, Jerome
    Hartenstein, Hannes
    IEEE COMMUNICATIONS LETTERS, 2010, 14 (10) : 951 - 953
  • [8] A Q-learning-based multi-rate transmission control scheme for RRC in WCDMA systems
    Ren, FC
    Chang, CJ
    Chen, YS
    13TH IEEE INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, VOL 1-5, PROCEEDINGS: SAILING THE WAVES OF THE WIRELESS OCEANS, 2002, : 1422 - 1426
  • [9] Routing, power control and rate adaptation: A Q-learning-based cross-layer design
    Wang, Ke
    Chai, Teck Yoong
    Wong, Wai-Choong
    COMPUTER NETWORKS, 2016, 102 : 20 - 37
  • [10] Iterative Q-Learning-Based Nonlinear Optimal Tracking Control
    Wei, Qinglai
    Song, Ruizhuo
    Xu, Yancai
    Liu, Derong
    PROCEEDINGS OF 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2016