An improved deep reinforcement learning routing technique for collision-free VANET

Cited by: 1
Authors
Upadhyay, Pratima [1 ]
Marriboina, Venkatadri [2 ]
Goyal, Samta Jain [1 ]
Kumar, Sunil [3 ]
El-Kenawy, El-Sayed M. [4 ]
Ibrahim, Abdelhameed [5 ]
Alhussan, Amel Ali [6 ]
Khafaga, Doaa Sami [6 ]
Affiliations
[1] Amity Univ Gwalior, Amity Sch Engn & Technol, Dept Comp Sci & Engn, Gwalior, Madhya Pradesh, India
[2] SVKMs NMIMS MPSTME Shirpur, Dept Comp Sci & Engn, Shirpur Campus, Shirpur, India
[3] Univ Petr & Energy Studies Dehradun, Sch Comp Sci, Dehra Dun, India
[4] Delta Higher Inst Engn & Technol, Dept Commun & Elect, Mansoura 35111, Egypt
[5] Mansoura Univ, Fac Engn, Comp Engn & Control Syst Dept, Mansoura 35516, Egypt
[6] Princess Nourah bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Comp Sci, POB 84428, Riyadh 11671, Saudi Arabia
Keywords
COGNITIVE-RADIO; NETWORKS; COMMUNICATION; ALGORITHM
DOI
10.1038/s41598-023-48956-y
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Vehicular Ad Hoc Networks (VANETs) are an emerging field that employs wireless local area network (WLAN) technology in an ad hoc topology. VANETs comprise diverse entities that must communicate effectively with one another and with associated infrastructure services, and they commonly face obstacles such as routing complexity and excessive control overhead. Although several routing schemes have been proposed, most fail to provide an integrated approach that addresses routing and control-overhead reduction together. The present study introduces an Improved Deep Reinforcement Learning (IDRL) routing approach aimed at reducing this added control overhead. The proposed IDRL technique optimizes the routing path while reducing convergence time under dynamic vehicle density. IDRL monitors, analyzes, and predicts routing behavior by leveraging transmission capacity and vehicle data; transmission delay is reduced by using adjacent vehicles to carry packets through Vehicle-to-Infrastructure (V2I) communication. Simulations were executed to assess the resilience and scalability of the model in delivering efficient routing while mitigating the amplified overheads. The method proves effective at transmitting safeguarded messages over V2I communication. The simulation results indicate that the proposed IDRL routing approach achieves lower latency, a higher packet delivery ratio, and improved data reliability compared with existing routing techniques.
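The abstract describes a reinforcement-learning agent that learns next-hop forwarding decisions from transmission capacity and vehicle data. As a rough illustration of that idea, and not the authors' actual IDRL model (whose network architecture, state encoding, and reward are not given here), the following is a minimal tabular Q-learning sketch of next-hop selection; the state representation, reward weights, and all identifiers are illustrative assumptions.

```python
import random
from collections import defaultdict

# Minimal Q-learning sketch of learned next-hop selection for VANET routing.
# States are (current_node, destination) pairs; actions are candidate neighbours.
# Reward shaping, hyperparameters, and node names are assumptions for illustration;
# the paper's IDRL presumably uses a deep network and richer state instead of a table.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate
q_table = defaultdict(float)               # Q[(state, next_hop)] -> expected return


def reward(delay_ms, delivered):
    """Reward successful, low-latency delivery (assumed reward shaping)."""
    return (10.0 if delivered else -10.0) - 0.01 * delay_ms


def choose_next_hop(state, neighbours):
    """Epsilon-greedy choice among reachable neighbouring vehicles or RSUs."""
    if not neighbours:
        return None
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: q_table[(state, n)])


def update(state, next_hop, r, next_state, next_neighbours):
    """Standard Q-learning update after observing the forwarding outcome."""
    best_next = max((q_table[(next_state, n)] for n in next_neighbours), default=0.0)
    q_table[(state, next_hop)] += ALPHA * (r + GAMMA * best_next - q_table[(state, next_hop)])


# Example: vehicle "v1" forwards a packet destined for roadside unit "rsu3" via a neighbour.
state = ("v1", "rsu3")
hop = choose_next_hop(state, ["v2", "v4"])
update(state, hop, reward(delay_ms=35, delivered=True), (hop, "rsu3"), ["rsu3", "v7"])
```

The learning loop above has the same shape as the behavior the abstract describes (observe neighbours, forward, score the outcome, update the policy), but the actual IDRL state features such as vehicle density or V2I link quality would need to come from the paper itself.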
Pages: 12
Related Papers
50 in total
  • [1] An improved deep reinforcement learning routing technique for collision-free VANET
    Pratima Upadhyay
    Venkatadri Marriboina
    Samta Jain Goyal
    Sunil Kumar
    El-Sayed M. El-Kenawy
    Abdelhameed Ibrahim
    Amel Ali Alhussan
    Doaa Sami Khafaga
    [J]. Scientific Reports, 13
  • [2] Improved reinforcement learning for collision-free local path planning of dynamic obstacle
    Yang, Xiao
    Han, Qilong
    [J]. OCEAN ENGINEERING, 2023, 283
  • [3] COLLISION-FREE UAV NAVIGATION WITH A MONOCULAR CAMERA USING DEEP REINFORCEMENT LEARNING
    Chen, Yun
    Gonzalez-Prelcic, Nuria
    Heath, Robert W., Jr.
    [J]. PROCEEDINGS OF THE 2020 IEEE 30TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2020,
  • [4] Peduncle collision-free grasping based on deep reinforcement learning for tomato harvesting robot
    Li, Yajun
    Feng, Qingchun
    Zhang, Yifan
    Peng, Chuanlang
    Ma, Yuhang
    Liu, Cheng
    Ru, Mengfei
    Sun, Jiahui
    Zhao, Chunjiang
    [J]. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 216
  • [5] Collision-Free Deep Reinforcement Learning for Mobile Robots using Crash-Prevention Policy
    Kobelrausch, Markus D.
    Jantsch, Axel
    [J]. 2021 7TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND ROBOTICS (ICCAR), 2021, : 52 - 59
  • [6] Collision-free path planning for a guava-harvesting robot based on recurrent deep reinforcement learning
    Lin, Guichao
    Zhu, Lixue
    Li, Jinhui
    Zou, Xiangjun
    Tang, Yunchao
    [J]. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 188
  • [7] Collision-free path planning for welding manipulator via hybrid algorithm of deep reinforcement learning and inverse kinematics
    Zhong, Jie
    Wang, Tao
    Cheng, Lianglun
    [J]. COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (03) : 1899 - 1912
  • [8] Collision-free path planning for welding manipulator via hybrid algorithm of deep reinforcement learning and inverse kinematics
    Jie Zhong
    Tao Wang
    Lianglun Cheng
    [J]. Complex & Intelligent Systems, 2022, 8 : 1899 - 1912
  • [9] SafeLight: A Reinforcement Learning Method toward Collision-Free Traffic Signal Control
    Du, Wenlu
    Ye, Junyi
    Gu, Jingyi
    Li, Jing
    Wei, Hua
    Wang, Guiling
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 14801 - 14810
  • [10] Collision-Free Path Planning for Multiple Drones Based on Safe Reinforcement Learning
    Chen, Hong
    Huang, Dan
    Wang, Chenggang
    Ding, Lu
    Song, Lei
    Liu, Hongtao
    [J]. DRONES, 2024, 8 (09)