Digital Twin of a Driver-in-the-Loop Race Car Simulation With Contextual Reinforcement Learning

Cited by: 2
Authors
Ju, Siwei [1 ,2 ]
van Vliet, Peter [1 ]
Arenz, Oleg [2 ]
Peters, Jan [2 ]
Affiliations
[1] Dr Ingn HC F Porsche AG, D-71287 Weissach, Germany
[2] Tech Univ Darmstadt, Comp Sci Dept, Intelligent Autonomous Syst, D-64289 Darmstadt, Germany
Keywords
Intelligent vehicles; Behavioral sciences; Reinforcement learning; Predictive models; Digital twins; Vehicle dynamics; Automotive engineering; imitation learning; autonomous agent; autonomous racing
DOI
10.1109/LRA.2023.3279618
Chinese Library Classification
TP24 [Robotics]
Discipline classification code
080202; 1405
Abstract
In order to facilitate rapid prototyping and testing in the advanced motorsport industry, we consider the problem of imitating and outperforming professional race car drivers based on demonstrations collected on a high-fidelity Driver-in-the-Loop (DiL) hardware simulator. We formulate a contextual reinforcement learning problem to learn a human-like and stochastic policy with domain-informed choices for states, actions, and reward functions. To leverage very limited training data and build human-like diverse behavior, we fit a probabilistic model to the expert demonstrations called the reference distribution, draw samples out of it, and use them as context for the reinforcement learning agent with context-specific states and rewards. In contrast to the non-human-like stochasticity introduced by Gaussian noise, our method contributes to a more effective exploration, better performance and a policy with human-like variance in evaluation metrics. Compared to previous work using a behavioral cloning agent, which is unable to complete competitive laps robustly, our agent outperforms the professional driver used to collect the demonstrations by around 0.4 seconds per lap on average, which is the first time known to the authors that an autonomous agent has outperformed a top-class professional race driver in a state-of-the-art, high-fidelity simulation. Being robust and sensitive to vehicle setup changes, our agent is able to predict plausible lap time and other performance metrics. Furthermore, unlike traditional lap time calculation methods, our agent indicates not only the gain in performance but also the driveability when faced with modified car balance, facilitating the digital twin of the DiL simulation.
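The core idea sketched in the abstract can be illustrated in a few lines: fit a probabilistic model (the "reference distribution") to features of the expert demonstrations, draw a sample from it, and use that sample as the context that conditions the agent's reward. The snippet below is a minimal sketch, not the authors' implementation: the demonstration features, the Gaussian model choice, and the quadratic tracking reward are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert demonstrations: each row holds one driven lap's
# lateral offsets from the track centerline at three reference points.
expert_laps = rng.normal(loc=[0.3, -0.1, 0.5], scale=0.05, size=(20, 3))

# Fit the "reference distribution" to the demonstrations; a multivariate
# Gaussian is an assumed stand-in for the paper's probabilistic model.
mu = expert_laps.mean(axis=0)
cov = np.cov(expert_laps, rowvar=False)

def sample_context():
    """Draw one human-like target line to condition the agent on."""
    return rng.multivariate_normal(mu, cov)

def context_reward(agent_offsets, context, scale=10.0):
    """Context-specific reward: penalize deviation from the sampled line."""
    return -scale * float(np.sum((np.asarray(agent_offsets) - context) ** 2))

context = sample_context()          # a fresh context per episode
r = context_reward(mu, context)     # reward for driving the mean expert line
```

Sampling a new context per episode is what yields human-like variance in the learned behavior, in contrast to perturbing actions with Gaussian noise.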
Pages: 4107-4114
Page count: 8