An Intelligent Path Planning Scheme of Autonomous Vehicles Platoon Using Deep Reinforcement Learning on Network Edge

Cited by: 0
Authors
Chen, Chen [1 ]
Jiang, Jiange [1 ]
Lv, Ning [1 ]
Li, Siyu [1 ]
Affiliations
[1] State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China
Funding
National Natural Science Foundation of China
Keywords
Reinforcement learning; Autonomous vehicles; Efficiency; Fuels; Intelligent vehicle highway systems; Intelligent systems; Deep learning
DOI
Not available
CLC Number
Subject Classification Code
Abstract
Recent advancements in Intelligent Transportation Systems suggest that roads will gradually be filled with autonomous vehicles that can drive themselves while communicating with each other and with the infrastructure. As a representative driving pattern of autonomous vehicles, platooning has great potential for reducing transport costs by lowering fuel consumption and increasing traffic efficiency. In this paper, to improve the driving efficiency of an autonomous vehicle platoon in terms of fuel consumption, a path planning scheme is envisioned using deep reinforcement learning on the network edge node. First, the system model of autonomous vehicle platooning on a common highway is given. Next, a joint optimization problem is formulated that considers the task deadline and the fuel consumption of each vehicle in the platoon. After that, a path determination strategy employing deep reinforcement learning is designed for the platoon. To help readers follow the scheme, a case study with instantiated parameters is also presented. Numerical results show that the proposed model can significantly reduce the fuel consumption of vehicle platoons while ensuring their task deadlines. © 2013 IEEE.
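The record above gives no implementation details for the learning component, so the following is a minimal, hypothetical Python sketch of how the path determination step could be realized with a DQN-style agent running on an edge node. Every quantity here is an illustrative assumption rather than the authors' design: a three-element state (remaining distance, remaining time budget, current segment index), a discrete choice among four candidate road segments, and a reward that charges fuel use and adds a fixed penalty when the task deadline is missed.

# Minimal DQN-style sketch for platoon path selection (illustrative only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, N_ACTIONS = 3, 4          # assumed sizes, not taken from the paper
GAMMA, LR, BATCH = 0.99, 1e-3, 64

class QNet(nn.Module):
    """Small MLP approximating Q(state, candidate segment)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=LR)
replay = deque(maxlen=10_000)

def reward(fuel_used, time_left):
    """Hypothetical reward: penalize fuel use, heavily penalize a missed deadline."""
    return -fuel_used - (100.0 if time_left < 0 else 0.0)

def select_action(state, epsilon):
    """Epsilon-greedy choice among the candidate road segments."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state).float()).argmax())

def train_step():
    """One DQN update from replayed (s, a, r, s', done) transitions."""
    if len(replay) < BATCH:
        return
    s, a, r, s2, done = map(torch.tensor, zip(*random.sample(list(replay), BATCH)))
    q = q_net(s.float()).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + GAMMA * target_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # (target_net would be synced to q_net periodically; omitted here.)

# One simulated platoon step would be logged as:
# replay.append((state, action, reward(fuel_used, time_left), next_state, done))

In such a setup the transitions would come from a platoon or traffic simulator, and the trained Q-network would be queried by the edge node to return the next segment for the platoon leader; the state layout, reward weights, and network sizes above are placeholders, not the paper's reported configuration.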
Pages: 99059 - 99069
Related Papers
50 records in total
  • [1] An Intelligent Path Planning Scheme of Autonomous Vehicles Platoon Using Deep Reinforcement Learning on Network Edge
    Chen, Chen
    Jiang, Jiange
    Lv, Ning
    Li, Siyu
    IEEE ACCESS, 2020, 8 : 99059 - 99069
  • [2] Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning
    You, Changxi
    Lu, Jianbo
    Filev, Dimitar
    Tsiotras, Panagiotis
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 114 : 1 - 18
  • [3] Obstacle avoidance planning of autonomous vehicles using deep reinforcement learning
    Qian, Yubin
    Feng, Song
    Hu, Wenhao
    Wang, Wanqiu
    ADVANCES IN MECHANICAL ENGINEERING, 2022, 14 (12)
  • [4] Path Planning for Autonomous Vehicles in Unknown Dynamic Environment Based on Deep Reinforcement Learning
    Hu, Hui
    Wang, Yuge
    Tong, Wenjie
    Zhao, Jiao
    Gu, Yulei
    APPLIED SCIENCES-BASEL, 2023, 13 (18):
  • [5] Survey of Deep Reinforcement Learning for Motion Planning of Autonomous Vehicles
    Aradi, Szilard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 740 - 759
  • [6] Explainable Deep Reinforcement Learning for UAV autonomous path planning
    He, Lei
    Aouf, Nabil
    Song, Bifeng
    AEROSPACE SCIENCE AND TECHNOLOGY, 2021, 118
  • [7] Path planning of autonomous UAVs using reinforcement learning
    Chronis, Christos
    Anagnostopoulos, Georgios
    Politi, Elena
    Garyfallou, Antonios
    Varlamis, Iraklis
    Dimitrakopoulos, George
    12TH EASN INTERNATIONAL CONFERENCE ON "INNOVATION IN AVIATION & SPACE FOR OPENING NEW HORIZONS", 2023, 2526
  • [8] Object Detection with Deep Neural Networks for Reinforcement Learning in the Task of Autonomous Vehicles Path Planning at the Intersection
    Yudin, D. A.
    Skrynnik, A.
    Krishtopik, A.
    Belkin, I.
    Panov, A. I.
    OPTICAL MEMORY AND NEURAL NETWORKS, 2019, 28 (04) : 283 - 295
  • [9] Path Planning Based on Deep Reinforcement Learning for Autonomous Underwater Vehicles Under Ocean Current Disturbance
    Chu, Zhenzhong
    Wang, Fulun
    Lei, Tingjun
    Luo, Chaomin
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (01) : 108 - 120