Reinforcement Twinning: From digital twins to model-based reinforcement learning

Cited: 0
Authors
Schena, Lorenzo [1 ,2 ]
Marques, Pedro A. [1 ,3 ]
Poletti, Romain [1 ,4 ]
Van den Berghe, Jan [1 ,5 ]
Mendez, Miguel A. [1 ]
Affiliations
[1] von Karman Inst, B-1640 Rhode St Genese, Belgium
[2] Vrije Univ Brussel VUB, Dept Mech Engn, B-1050 Brussels, Belgium
[3] Univ Libre Bruxelles, Ave Franklin Roosevelt 50, B-1050 Brussels, Belgium
[4] Univ Ghent, Sint Pietersnieuwstr 41, B-9000 Ghent, Belgium
[5] Catholic Univ Louvain, Inst Mech Mat & Civil Engn iMMC, B-1348 Louvain La Neuve, Belgium
Keywords
Digital twins; System identification; Reinforcement learning; Adjoint-based assimilation; NONLINEAR-SYSTEM IDENTIFICATION; NEURAL-NETWORKS; WIND TURBINE; DATA ASSIMILATION; PRESSURE CONTROL; DESIGN; TUTORIAL; DYNAMICS; ROTATION; ENERGY;
DOI
10.1016/j.jocs.2024.102421
CLC number
TP39 [Applications of computers]
Discipline codes
081203; 0835
Abstract
The concept of digital twins promises to revolutionize engineering by offering new avenues for optimization, control, and predictive maintenance. We propose a novel framework for simultaneously training the digital twin of an engineering system and an associated control agent. The training of the twin combines methods from adjoint-based data assimilation and system identification, while the control agent is trained by letting it evolve along two independent paths: one driven by model-based optimal control and the other by model-free reinforcement learning. The virtual environment offered by the digital twin serves as a playground for confrontation and indirect interaction between the two paths. This interaction follows an "expert demonstrator" scheme: the best-performing policy is selected to interact with the real environment and is "cloned" to the other agent whenever its independent training stagnates. We refer to this framework as Reinforcement Twinning (RT). The framework is tested on three vastly different engineering systems and control tasks, namely (1) the control of a wind turbine subject to time-varying wind speed, (2) the trajectory control of flapping-wing micro air vehicles (FWMAVs) subject to wind gusts, and (3) the mitigation of thermal loads in the management of cryogenic storage tanks. The test cases are implemented using simplified models for which the ground truth on the closure law is available. The results show that the adjoint-based training of the digital twin is remarkably sample-efficient and completes within a few iterations. Concerning the control agent, the results show that the model-based and model-free training benefit from each other's learning experience and complementary approach. These encouraging results open the path towards implementing the RT framework on real systems.
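The "expert demonstrator" exchange described in the abstract can be sketched in a few lines. The function, the dictionary-based policy representation, and the stagnation criterion below are illustrative assumptions for exposition, not the authors' implementation:

```python
def select_and_clone(agents, history, patience=3, tol=1e-3):
    """Illustrative sketch of the expert-demonstrator exchange.

    agents:  dict mapping agent name -> policy parameters (here a dict)
    history: dict mapping agent name -> list of episode returns
    Picks the best-performing policy and clones it onto any other agent
    whose last `patience` returns vary by less than `tol` (stagnation).
    """
    # Best agent = highest most-recent return
    best = max(agents, key=lambda name: history[name][-1])
    for name in agents:
        recent = history[name][-patience:]
        stagnated = (len(recent) == patience
                     and max(recent) - min(recent) < tol)
        if name != best and stagnated:
            # "Clone" the expert's policy onto the stagnating agent
            agents[name] = dict(agents[best])
    return best, agents


# Toy usage: the model-free path has stagnated, so it receives a copy
# of the model-based path's (better) policy.
agents = {"model_based": {"w": 0.2}, "model_free": {"w": 0.9}}
history = {"model_based": [1.0, 1.2, 1.5],
           "model_free": [0.5, 0.5001, 0.5002]}
best, agents = select_and_clone(agents, history)
```

In the actual framework, both agents continue training independently after cloning, with the digital twin providing the virtual environment in which their policies are compared.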
Pages: 28