Digital Twin of a Driver-in-the-Loop Race Car Simulation With Contextual Reinforcement Learning

Cited by: 2
Authors
Ju, Siwei [1 ,2 ]
van Vliet, Peter [1 ]
Arenz, Oleg [2 ]
Peters, Jan [2 ]
Affiliations
[1] Dr Ingn HC F Porsche AG, D-71287 Weissach, Germany
[2] Tech Univ Darmstadt, Comp Sci Dept, Intelligent Autonomous Syst, D-64289 Darmstadt, Germany
Keywords
Intelligent vehicles; Behavioral sciences; Reinforcement learning; Predictive models; Digital twins; Vehicle dynamics; Automotive engineering; imitation learning; autonomous agent; autonomous racing
DOI
10.1109/LRA.2023.3279618
Chinese Library Classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
In order to facilitate rapid prototyping and testing in the advanced motorsport industry, we consider the problem of imitating and outperforming professional race car drivers based on demonstrations collected on a high-fidelity Driver-in-the-Loop (DiL) hardware simulator. We formulate a contextual reinforcement learning problem to learn a human-like, stochastic policy with domain-informed choices of states, actions, and reward functions. To leverage very limited training data and produce diverse, human-like behavior, we fit a probabilistic model, called the reference distribution, to the expert demonstrations, draw samples from it, and use them as context for the reinforcement learning agent, with context-specific states and rewards. In contrast to the non-human-like stochasticity introduced by Gaussian noise, our method yields more effective exploration, better performance, and a policy with human-like variance in evaluation metrics. Whereas previous work using a behavioral cloning agent was unable to complete competitive laps robustly, our agent outperforms the professional driver who provided the demonstrations by around 0.4 seconds per lap on average; to the authors' knowledge, this is the first time an autonomous agent has outperformed a top-class professional race driver in a state-of-the-art, high-fidelity simulation. Being both robust and sensitive to vehicle setup changes, our agent can predict plausible lap times and other performance metrics. Furthermore, unlike traditional lap time calculation methods, our agent indicates not only the gain in performance but also the driveability of a modified car balance, facilitating the digital twin of the DiL simulation.
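The core idea in the abstract — fitting a "reference distribution" to expert demonstrations and sampling contexts from it instead of injecting Gaussian action noise — can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the feature choice (toy per-lap statistics), the Gaussian model, and the distance-based context reward are all assumptions made for the sketch.

```python
import numpy as np

def fit_reference_distribution(demo_features):
    """Fit a multivariate Gaussian to expert demonstration features
    (a stand-in for the paper's probabilistic reference distribution)."""
    mean = demo_features.mean(axis=0)
    cov = np.cov(demo_features, rowvar=False)
    return mean, cov

def sample_context(mean, cov, rng):
    """Draw one context vector from the reference distribution;
    each sampled context parameterizes the agent's states and rewards."""
    return rng.multivariate_normal(mean, cov)

def context_reward(state, context):
    """Context-specific reward: penalize deviation from the sampled
    reference behavior rather than from a single mean trajectory."""
    return -np.linalg.norm(state - context)

rng = np.random.default_rng(0)
# Toy demonstration features, e.g. [mean corner speed, lateral offset] per lap.
demos = rng.normal(loc=[50.0, 0.2], scale=[2.0, 0.05], size=(20, 2))
mean, cov = fit_reference_distribution(demos)
ctx = sample_context(mean, cov, rng)
r = context_reward(np.array([50.0, 0.2]), ctx)
```

Because contexts are drawn from a distribution fitted to human laps, the induced stochasticity stays within the envelope of observed expert behavior, which is the property the abstract contrasts with unstructured Gaussian exploration noise.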
Pages: 4107-4114
Page count: 8