A Selective Federated Reinforcement Learning Strategy for Autonomous Driving

Cited by: 27
Authors
Fu, Yuchuan [1 ,2 ]
Li, Changle [1 ,2 ]
Yu, F. Richard [3 ]
Luan, Tom H. [4 ]
Zhang, Yao [5 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Res Inst Smart Transportat, Xian 710071, Shaanxi, Peoples R China
[3] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
[4] Xidian Univ, Sch Cyber Engn, Xian 710071, Shaanxi, Peoples R China
[5] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Autonomous vehicles; Adaptation models; Reinforcement learning; Computational modeling; Data models; Training; Task analysis; Networked autonomous driving; federated learning; knowledge aggregation; WIRELESS NETWORKS; BLOCKCHAIN; MODEL; DESIGN;
DOI
10.1109/TITS.2022.3219644
Chinese Library Classification
TU [Building Science]
Discipline Classification Code
0813
Abstract
Currently, the complex traffic environment challenges the fast and accurate response of a connected autonomous vehicle (CAV). More importantly, it is difficult for different CAVs to collaborate and share knowledge. To remedy this, this paper proposes a selective federated reinforcement learning (SFRL) strategy that achieves online knowledge aggregation and improves the accuracy and environmental adaptability of the autonomous driving model. First, we propose a federated reinforcement learning framework that allows participants to use the knowledge of other CAVs when taking actions, thereby realizing online knowledge transfer and aggregation. Second, we use reinforcement learning to train the local driving models of CAVs for collision avoidance tasks. Third, considering the efficiency of federated learning (FL) and the additional communication overhead it introduces, we propose a CAV selection strategy applied before local models are uploaded. The selection accounts for the reputation of each CAV, the quality of its local model, and its time overhead, so as to include as many high-quality participants as possible under resource and time constraints. With the above strategy, our framework can aggregate and reuse the knowledge learned by CAVs traveling in different environments to assist driving decisions. Extensive simulation results validate that our proposal improves model accuracy and learning efficiency while reducing communication overhead.
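The selection-before-upload step summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring weights, the greedy time-budget rule, and the helper names (select_cavs, federated_average, reputation, quality, time_cost, time_budget) are illustrative assumptions; only the idea of scoring CAVs by reputation, local-model quality, and time overhead and then federated-averaging the selected local models comes from the abstract.

# Minimal sketch (assumed, not the paper's algorithm) of selective federated aggregation:
# score each CAV by reputation, local-model quality, and time overhead, greedily pick
# high-scoring CAVs within a time budget, then FedAvg only the selected local models.
import numpy as np

def select_cavs(reputation, quality, time_cost, time_budget, w=(0.4, 0.4, 0.2)):
    """Rank CAVs by a weighted score and greedily select them within the time budget."""
    reputation = np.asarray(reputation, dtype=float)
    quality = np.asarray(quality, dtype=float)
    time_cost = np.asarray(time_cost, dtype=float)
    # Higher reputation/quality is better; higher time overhead is worse.
    score = w[0] * reputation + w[1] * quality - w[2] * time_cost
    selected, spent = [], 0.0
    for i in np.argsort(-score):              # best-scoring CAVs first
        if spent + time_cost[i] <= time_budget:
            selected.append(int(i))
            spent += time_cost[i]
    return selected

def federated_average(local_weights, selected):
    """Plain FedAvg over the selected CAVs' local model parameters."""
    chosen = [local_weights[i] for i in selected]
    return [np.mean(layer, axis=0) for layer in zip(*chosen)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cavs = 8
    # Toy local models: two parameter arrays per CAV.
    local_weights = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(n_cavs)]
    reputation = rng.uniform(0.5, 1.0, n_cavs)
    quality = rng.uniform(0.3, 1.0, n_cavs)    # e.g. validation return of the local RL policy
    time_cost = rng.uniform(0.1, 1.0, n_cavs)  # estimated upload/compute latency
    chosen = select_cavs(reputation, quality, time_cost, time_budget=2.5)
    global_model = federated_average(local_weights, chosen)
    print("selected CAVs:", chosen)
    print("aggregated layer shapes:", [w.shape for w in global_model])

In practice the quality term might be the validation return of each CAV's locally trained collision-avoidance policy and the time term an estimate of its upload latency; the weighted-sum score used here is only one plausible way to trade the three criteria off.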
Pages: 1655-1668
Number of pages: 14
Related Papers
50 records in total
  • [41] Autonomous driving at the handling limit using residual reinforcement learning
    Hou, Xiaohui
    Zhang, Junzhi
    He, Chengkun
    Ji, Yuan
    Zhang, Junfeng
    Han, Jinheng
    [J]. ADVANCED ENGINEERING INFORMATICS, 2022, 54
  • [42] Autonomous Vehicle Driving Path Control with Deep Reinforcement Learning
    Tiong, Teckchai
    Saad, Ismail
    Teo, Kenneth Tze Kin
    bin Lago, Herwansyah
    [J]. 2023 IEEE 13TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE, CCWC, 2023, : 84 - 92
  • [43] Safe Reinforcement Learning in Autonomous Driving With Epistemic Uncertainty Estimation
    Zhang, Zheng
    Liu, Qi
    Li, Yanjie
    Lin, Ke
    Li, Linyu
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024,
  • [44] Test Scenario Generation for Autonomous Driving Systems with Reinforcement Learning
    Lu, Chengjie
    [J]. 2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: COMPANION PROCEEDINGS, ICSE-COMPANION, 2023, : 317 - 319
  • [45] Personalized Car Following for Autonomous Driving with Inverse Reinforcement Learning
    Zhao, Zhouqiao
    Wang, Ziran
    Han, Kyungtae
    Gupta, Rohit
    Tiwari, Prashant
    Wu, Guoyuan
    Barth, Matthew J.
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 2891 - 2897
  • [46] Identify, Estimate and Bound the Uncertainty of Reinforcement Learning for Autonomous Driving
    Zhou, Weitao
    Cao, Zhong
    Deng, Nanshan
    Jiang, Kun
    Yang, Diange
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (08) : 7932 - 7942
  • [47] Deep Hierarchical Reinforcement Learning for Autonomous Driving with Distinct Behaviors
    Chen, Jianyu
    Wang, Zining
    Tomizuka, Masayoshi
    [J]. 2018 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2018, : 1239 - 1244
  • [48] A Selective Defense Strategy for Federated Learning Against Attacks
    Chen, Zhuo
    Jiang, Hui
    Zhou, Yang
    [J]. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (03): : 1119 - 1127
  • [49] Enhancing Autonomous Driving With Spatial Memory and Attention in Reinforcement Learning
    Gerasyov, Matvey
    Savchenko, Andrey V.
    Makarov, Ilya
    [J]. IEEE Access, 2024, 12 : 173316 - 173324
  • [50] Deep Reinforcement Learning for Autonomous Driving by Transferring Visual Features
    Zhou, Hongli
    Chen, Xiaolei
    Zhang, Guanwen
    Zhou, Wei
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 4436 - 4441