Reinforcement Twinning: From digital twins to model-based reinforcement learning

Cited by: 0
Authors
Schena, Lorenzo [1 ,2 ]
Marques, Pedro A. [1 ,3 ]
Poletti, Romain [1 ,4 ]
Van den Berghe, Jan [1 ,5 ]
Mendez, Miguel A. [1 ]
Affiliations
[1] von Karman Inst, B-1640 Rhode St Genese, Belgium
[2] Vrije Univ Brussel VUB, Dept Mech Engn, B-1050 Brussels, Belgium
[3] Univ Libre Bruxelles, Ave Franklin Roosevelt 50, B-1050 Brussels, Belgium
[4] Univ Ghent, Sint Pietersnieuwstr 41, B-9000 Ghent, Belgium
[5] Catholic Univ Louvain, Inst Mech Mat & Civil Engn iMMC, B-1348 Louvain La Neuve, Belgium
Keywords
Digital twins; System identification; Reinforcement learning; Adjoint-based assimilation
Keywords Plus
NONLINEAR-SYSTEM IDENTIFICATION; NEURAL-NETWORKS; WIND TURBINE; DATA ASSIMILATION; PRESSURE CONTROL; DESIGN; TUTORIAL; DYNAMICS; ROTATION; ENERGY
DOI
10.1016/j.jocs.2024.102421
CLC Number
TP39 [Applications of computers]
Discipline Codes
081203; 0835
Abstract
The concept of digital twins promises to revolutionize engineering by offering new avenues for optimization, control, and predictive maintenance. We propose a novel framework for simultaneously training the digital twin of an engineering system and an associated control agent. The training of the twin combines methods from adjoint-based data assimilation and system identification, while the training of the control agent combines model-based optimal control and model-free reinforcement learning. The control agent is trained by letting it evolve independently along two paths: one driven by model-based optimal control and the other driven by reinforcement learning. The virtual environment offered by the digital twin serves as a playground for confrontation and indirect interaction. This interaction takes the form of an "expert demonstrator": the best policy is selected to interact with the real environment and is "cloned" to the other path if that path's independent training stagnates. We refer to this framework as Reinforcement Twinning (RT). The framework is tested on three vastly different engineering systems and control tasks, namely (1) the control of a wind turbine subject to time-varying wind speed, (2) the trajectory control of flapping-wing micro air vehicles (FWMAVs) subject to wind gusts, and (3) the mitigation of thermal loads in the management of cryogenic storage tanks. The test cases are implemented using simplified models for which the ground truth on the closure law is available. The results show that the adjoint-based training of the digital twin is remarkably sample-efficient, completing within a few iterations. Concerning the control agent, the results show that the model-based and model-free training paths benefit from each other's learning experience and complementary learning approaches. These encouraging results open the path towards implementing the RT framework on real systems.
Pages: 28
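
The abstract describes the RT algorithm only at a high level: assimilate the twin from real data, train two agents independently (one model-based, one model-free) on the twin, let the better policy act on the real system, and clone it onto the other path when that path stagnates. The sketch below illustrates one possible reading of that loop. It is based solely on the abstract; every name in it (Twin, Agent, assimilate, train_step, clone_from, the patience threshold) is a hypothetical placeholder, not the authors' actual implementation or API.

```python
# Illustrative sketch of the Reinforcement Twinning (RT) loop, inferred from
# the abstract only. All classes and methods are hypothetical stand-ins.
import copy
import random

class Twin:
    """Stand-in digital twin: assimilated from data, then used as a virtual
    playground to train and score the two control agents."""
    def assimilate(self, data):
        # Placeholder for the adjoint-based assimilation / system
        # identification step (reported as converging in a few iterations).
        pass

    def evaluate(self, agent):
        # Placeholder performance estimate of the agent's policy on the twin.
        return agent.score

class Agent:
    """Stand-in control agent following one of the two training paths."""
    def __init__(self, name):
        self.name = name
        self.score = 0.0   # dummy performance measure
        self.policy = {}   # dummy policy parameters

    def train_step(self, twin):
        # Placeholder for either the model-based optimal-control update or
        # the model-free reinforcement-learning update on the twin.
        self.score += random.random()

    def clone_from(self, other):
        # "Expert demonstrator" cloning: copy the better policy's parameters.
        self.policy = copy.deepcopy(other.policy)
        self.score = other.score

def reinforcement_twinning(twin, mb, rl, episodes=20, patience=3):
    """Hypothetical RT loop: twin update, independent training of both
    agents, confrontation on the twin, and cloning upon stagnation."""
    stall = {mb.name: 0, rl.name: 0}
    best_so_far = {mb.name: float("-inf"), rl.name: float("-inf")}
    for _ in range(episodes):
        twin.assimilate(data=None)   # 1) update the twin from real-system data
        mb.train_step(twin)          # 2a) model-based optimal-control path
        rl.train_step(twin)          # 2b) model-free RL path
        # 3) Confrontation: the better policy would interact with the real
        #    environment (the real-system rollout is omitted in this sketch).
        best, worst = (mb, rl) if twin.evaluate(mb) >= twin.evaluate(rl) else (rl, mb)
        # 4) Stagnation check on the weaker path; clone the expert if stuck.
        s = twin.evaluate(worst)
        if s <= best_so_far[worst.name]:
            stall[worst.name] += 1
        else:
            best_so_far[worst.name] = s
            stall[worst.name] = 0
        if stall[worst.name] >= patience:
            worst.clone_from(best)
            stall[worst.name] = 0
    return max((mb, rl), key=twin.evaluate)

pi = reinforcement_twinning(Twin(), Agent("mb"), Agent("rl"))
```

The design point this sketch tries to capture is the indirect coupling: the two agents never share gradients or updates; they exchange information only through occasional policy cloning mediated by the twin's evaluation, which is what the abstract calls the "expert demonstrator" mechanism.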