Comfortable driving control for connected automated vehicles based on deep reinforcement learning and knowledge transfer

Cited: 0
Authors
Wu, Chuna [1 ,2 ]
Chen, Jing [3 ]
Yao, Jinqiang [4 ]
Chen, Tianyi [4 ]
Cao, Jing [5 ]
Zhao, Cong [3 ]
Affiliations
[1] Minist Transport, Key Lab MOT Operat Safety Technol Transport Vehicl, Res Inst Highway, Beijing, Peoples R China
[2] Minist Transport, Automot Transportat Res Ctr, Res Inst Highway, Beijing, Peoples R China
[3] Tongji Univ, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[4] Zhejiang Commun Investment Grp Co Ltd, ITS Branch, Hangzhou, Peoples R China
[5] Soc Automot Engineers China, Ctr Automot Intelligence & Future Mobil, Beijing, Peoples R China
Keywords
automated driving and intelligent vehicles; intelligent control; SEMIACTIVE SUSPENSION SYSTEMS; HYBRID ELECTRIC VEHICLE; PREVIEW; MODEL;
DOI
10.1049/itr2.12540
Chinese Library Classification
TM [Electrical technology]; TN [Electronic and communication technology];
Discipline codes
0808; 0809;
Abstract
With the development of connected automated vehicles (CAVs), preview and large-scale road profile information detected by different vehicles becomes available for the speed planning and active suspension control of CAVs to enhance ride comfort. Existing methods are not well adapted to the rough pavements of different districts, where the distributions of road roughness differ significantly owing to traffic volume, maintenance, weather, and other factors. This study proposes a comfortable driving framework that coordinates speed planning and suspension control with knowledge transfer. Building on existing speed planning approaches, a deep reinforcement learning (DRL) algorithm is designed to learn comfortable suspension control strategies from preview road and speed information. Fine-tuning and lateral connections are adopted to transfer the learned knowledge for adaptability across districts. DRL-based suspension control models are trained and transferred using real-world rough pavement data from districts of Shanghai, China. The experimental results show that, compared to model predictive control, the proposed method increases vertical comfort by 41.10% on rough pavements. The proposed framework is shown to be applicable to stochastic rough pavements for CAVs.
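The abstract's transfer mechanism (a frozen source-district policy feeding a new target-district policy through lateral connections, in the style of progressive networks) can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, weight names, and the two-layer policy are hypothetical, and training is omitted; the sketch only shows how the target column reuses frozen source-column features.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions: the state could encode preview road profile
# samples plus planned speed; the action a suspension force command.
STATE_DIM, HIDDEN, ACTION_DIM = 8, 16, 1

# Column 1: policy trained on the source district (frozen after training).
W1_src = rng.normal(size=(STATE_DIM, HIDDEN))
W2_src = rng.normal(size=(HIDDEN, ACTION_DIM))

# Column 2: fresh policy for the target district, plus a lateral
# connection U_lat that injects frozen source features into its output.
W1_tgt = rng.normal(size=(STATE_DIM, HIDDEN))
W2_tgt = rng.normal(size=(HIDDEN, ACTION_DIM))
U_lat = 0.1 * rng.normal(size=(HIDDEN, ACTION_DIM))

def target_policy(state):
    """Forward pass of the target column: combines its own features with
    frozen source-column features via the lateral connection."""
    h_src = relu(state @ W1_src)   # frozen source features (not trained)
    h_tgt = relu(state @ W1_tgt)   # target features (trained on new district)
    return h_tgt @ W2_tgt + h_src @ U_lat

state = rng.normal(size=STATE_DIM)
action = target_policy(state)      # one suspension control output
print(action.shape)
```

During adaptation, only `W1_tgt`, `W2_tgt`, and `U_lat` would be updated by the DRL algorithm, so knowledge learned on the source district's roughness distribution is preserved rather than overwritten, which is the motivation for lateral connections over plain fine-tuning.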
Pages: 15
Related papers
50 records in total
  • [41] Learning to falsify automated driving vehicles with prior knowledge
    Favrin, Andrea
    Nenchev, Vladislav
    Cenedese, Angelo
    IFAC PAPERSONLINE, 2020, 53 (02): : 15122 - 15127
  • [42] Deep reinforcement learning based control for Autonomous Vehicles in CARLA
    Perez-Gil, Oscar
    Barea, Rafael
    Lopez-Guillen, Elena
    Bergasa, Luis M.
    Gomez-Huelamo, Carlos
    Gutierrez, Rodrigo
    Diaz-Diaz, Alejandro
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (03) : 3553 - 3576
  • [43] A Control Strategy of Autonomous Vehicles based on Deep Reinforcement Learning
    Xia, Wei
    Li, Huiyun
    Li, Baopu
    PROCEEDINGS OF 2016 9TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 2, 2016, : 198 - 201
  • [45] Deep reinforcement-learning-based driving policy for autonomous road vehicles
    Makantasis, Konstantinos
    Kontorinaki, Maria
    Nikolos, Ioannis
    IET INTELLIGENT TRANSPORT SYSTEMS, 2020, 14 (01) : 13 - 24
  • [46] A deep reinforcement learning-based distributed connected automated vehicle control under communication failure
    Shi, Haotian
    Zhou, Yang
    Wang, Xin
    Fu, Sicheng
    Gong, Siyuan
    Ran, Bin
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2022, 37 (15) : 2033 - 2051
  • [47] Review on eco-driving control for connected and automated vehicles
    Li, Jie
    Fotouhi, Abbas
    Liu, Yonggang
    Zhang, Yuanjian
    Chen, Zheng
    RENEWABLE & SUSTAINABLE ENERGY REVIEWS, 2024, 189
  • [48] Application of reinforcement learning to adaptive control of connected vehicles
    Ichikawa, Ikumi
    Ushio, Toshimitsu
    IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2019, 10 (04): : 443 - 454
  • [49] Cooperative On-Ramp Merging Control of Connected and Automated Vehicles: Distributed Multi-Agent Deep Reinforcement Learning Approach
    Zhou, Shanxing
    Zhuang, Weichao
    Yin, Guodong
    Liu, Haoji
    Qiu, Chunlong
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 402 - 408
  • [50] Deep Reinforcement Learning in Lane Merge Coordination for Connected Vehicles
    Nassef, Omar
    Sequeira, Luis
    Salam, Elias
    Mahmoodi, Toktam
    2020 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC), 2020,