A Model-Based Reinforcement Learning Protocol for Routing in Vehicular Ad hoc Network

Cited by: 9
Authors
Jafarzadeh, Omid [1 ]
Dehghan, Mehdi [1 ]
Sargolzaey, Hadi [1 ]
Esnaashari, Mohammad Mehdi [2 ]
Affiliations
[1] Islamic Azad Univ, Dept Elect Comp & IT Engn, Qazvin Branch, Qazvin, Iran
[2] KN Toosi Univ Technol, Fac Comp Engn, Tehran, Iran
Keywords
VANET; Routing; Reinforcement Learning; Fuzzy Logic; Transmission
DOI
10.1007/s11277-021-09166-9
CLC Classification
TN [Electronic Technology, Communication Technology]
Discipline Code
0809
Abstract
Today, by equipping vehicles with wireless technologies, the Vehicular Ad Hoc Network (VANET) has emerged. This type of network can be used in many fields, such as emergency response, safety, and entertainment, and is considered a main component of intelligent transportation systems. However, due to node (vehicle) velocity, varying density, obstacles, and the lack of fixed infrastructure, finding and maintaining a route between nodes is always challenging in a VANET. A routing protocol can be effective only if the nodes can learn and adapt to such a dynamic environment. One way to achieve this adaptation is to use machine learning techniques. In this paper, we pursue this goal by applying Multi-Agent Reinforcement Learning (MARL), which enables agents to solve routing optimization problems in a distributed way. Although model-free Reinforcement Learning (RL) schemes have been introduced for this purpose, such techniques learn by trial and error in a real environment and therefore cannot reach an optimal policy in a short time. To deal with this problem, we propose a model-based RL routing scheme. We have also developed a Fuzzy Logic (FL) system that evaluates the quality of links between neighboring nodes based on parameters such as velocity and connection quality. The outputs of this fuzzy system are used to form the state transition model needed in MARL. Evaluation results show that our approach improves routing metrics such as delivery ratio, end-to-end delay, and traffic overhead.
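The abstract describes two coupled ideas: a fuzzy system that scores each neighbor link from velocity and connection quality, and a model-based RL step that plans routes over the resulting state transition model. The sketch below is a minimal, hypothetical illustration of that combination; the membership shapes, thresholds, and the names `link_quality` and `value_iteration` are this sketch's assumptions, not the paper's actual design.

```python
# Hypothetical sketch: a fuzzy score rates each neighbor link, and those
# scores act as transition (delivery) probabilities in a model-based
# planning step. All shapes, thresholds, and names are illustrative.

def triangular(x, a, b, c):
    """Triangular fuzzy membership over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def link_quality(rel_speed, signal_norm):
    """Fuzzy link score in [0, 1]: 'stable' (low relative speed, m/s)
    AND 'good signal' (normalized connection quality), combined with a
    Mamdani-style min."""
    stable = triangular(rel_speed, -1.0, 0.0, 25.0)  # peaks at 0 m/s
    return min(stable, signal_norm)

def value_iteration(neighbors, delivery_prob, dest, gamma=0.9, iters=50):
    """Plan against the learned model: a node's value is its best expected
    discounted value over outgoing links, each succeeding with probability
    delivery_prob[(u, v)]; reaching the destination is worth 1.0."""
    V = {n: 0.0 for n in neighbors}
    V[dest] = 1.0
    for _ in range(iters):
        for u in neighbors:
            if u == dest:
                continue
            V[u] = max((delivery_prob[(u, v)] * gamma * V[v]
                        for v in neighbors[u]), default=0.0)
    return V

# Tiny illustrative topology: source s can relay via a or b to destination d.
nbrs = {'s': ['a', 'b'], 'a': ['d'], 'b': ['d'], 'd': []}
p = {('s', 'a'): link_quality(5.0, 0.9),   # slow-moving neighbor -> 0.8
     ('s', 'b'): link_quality(28.0, 0.9),  # fast-moving neighbor -> 0.0
     ('a', 'd'): 0.9, ('b', 'd'): 0.9}
V = value_iteration(nbrs, p, 'd')
best_hop = max(nbrs['s'], key=lambda v: p[('s', v)] * V[v])  # -> 'a'
```

Because planning happens against the model rather than by trial and error on live traffic, the fast-moving neighbor `b` is avoided without ever losing a packet through it; this is the advantage the abstract claims over model-free RL.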
Pages: 975-1001 (27 pages)
Related Papers (50 in total)
  • [1] A Model-Based Reinforcement Learning Protocol for Routing in Vehicular Ad hoc Network
    Jafarzadeh, Omid; Dehghan, Mehdi; Sargolzaey, Hadi; Esnaashari, Mohammad Mehdi
    [J]. Wireless Personal Communications, 2022, 123: 975-1001
  • [2] Priority based Routing Protocol in Vehicular Ad hoc Network
    Suthaputchakun, Chakkaphong; Sun, Zhili
    [J]. 2011 IEEE Symposium on Computers and Communications (ISCC), 2011
  • [3] A Road Selection Based Routing Protocol for Vehicular Ad Hoc Network
    Bhoi, Sourav Kumar; Khilar, Pabitra Mohan
    [J]. Wireless Personal Communications, 2015, 83(4): 2463-2483
  • [4] Adaptive Routing for an Ad Hoc Network Based on Reinforcement Learning
    Desai, Rahul; Patil, B. P.
    [J]. International Journal of Business Data Communications and Networking, 2015, 11(2): 40-52
  • [5] Routing using reinforcement learning in vehicular ad hoc networks
    Saravanan, M.; Ganeshkumar, P.
    [J]. Computational Intelligence, 2020, 36(2): 682-697
  • [6] Vehicular Ad Hoc Network Mobility Models Applied for Reinforcement Learning Routing Algorithm
    Kulkarni, Shrirang Ambaji; Rao, G. Raghavendra
    [J]. Contemporary Computing, Pt 2, 2010, 95: 230+
  • [7] A Routing Protocol Based on CP Neural Network for Vehicular Ad Hoc Networks
    Wen, Wei
    [J]. 2020 4th International Conference on Electrical, Automation and Mechanical Engineering, 2020, 1626
  • [8] A Social Utility-Based Routing Protocol in Vehicular Ad Hoc Network
    Tang, Lun; Gu, Xiaoqin; Han, Jie; Chen, Qianbin
    [J]. International Conference on Computational and Information Sciences (ICCIS 2014), 2014: 827-832
  • [9] Reinforcement Learning Based Mobility Adaptive Routing for Vehicular Ad-Hoc Networks
    Wu, Jinqiao; Fang, Min; Li, Xiao
    [J]. Wireless Personal Communications, 2018, 101(4): 2143-2171