Reinforcement Twinning: From digital twins to model-based reinforcement learning

Cited by: 0
Authors
Schena, Lorenzo [1 ,2 ]
Marques, Pedro A. [1 ,3 ]
Poletti, Romain [1 ,4 ]
Van den Berghe, Jan [1 ,5 ]
Mendez, Miguel A. [1 ]
Affiliations
[1] von Karman Inst, B-1640 Rhode St Genese, Belgium
[2] Vrije Univ Brussel VUB, Dept Mech Engn, B-1050 Brussels, Belgium
[3] Univ Libre Bruxelles, Ave Franklin Roosevelt 50, B-1050 Brussels, Belgium
[4] Univ Ghent, Sint Pietersnieuwstr 41, B-9000 Ghent, Belgium
[5] Catholic Univ Louvain, Inst Mech Mat & Civil Engn iMMC, B-1348 Louvain La Neuve, Belgium
Keywords
Digital twins; System identification; Reinforcement learning; Adjoint-based assimilation; Nonlinear system identification; Neural networks; Wind turbine; Data assimilation; Pressure control; Design; Tutorial; Dynamics; Rotation; Energy
DOI
10.1016/j.jocs.2024.102421
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
The concept of digital twins promises to revolutionize engineering by offering new avenues for optimization, control, and predictive maintenance. We propose a novel framework for simultaneously training the digital twin of an engineering system and an associated control agent. The training of the twin combines methods from adjoint-based data assimilation and system identification, while the training of the control agent combines model-based optimal control and model-free reinforcement learning. The control agent evolves independently along two paths: one driven by model-based optimal control and another driven by reinforcement learning. The virtual environment offered by the digital twin serves as a playground for confrontation and indirect interaction between the two paths. This interaction takes the form of an "expert demonstrator": the best-performing policy is selected to interact with the real environment and is "cloned" to the other path if its independent training stagnates. We refer to this framework as Reinforcement Twinning (RT). The framework is tested on three vastly different engineering systems and control tasks, namely (1) the control of a wind turbine subject to time-varying wind speed, (2) the trajectory control of flapping-wing micro air vehicles (FWMAVs) subject to wind gusts, and (3) the mitigation of thermal loads in the management of cryogenic storage tanks. The test cases use simplified models for which the ground truth on the closure law is available. The results show that the adjoint-based training of the digital twin is remarkably sample-efficient and is completed within a few iterations. Concerning the control agent, the model-based and model-free training paths benefit from each other's learning experience and complementary learning approaches. These encouraging results open the path towards implementing the RT framework on real systems.
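The loop described in the abstract (assimilate real-system data into the twin, advance a model-based and a model-free control path on the twin, and let the better policy act on the real system and be cloned into a stagnating path) can be sketched compactly. The following Python snippet is a minimal illustration under strong simplifying assumptions, not the authors' implementation: the one-dimensional plant, the class names (DigitalTwin, LinearPolicy), the gradient-descent fit standing in for adjoint-based assimilation, the grid search standing in for model-based optimal control, the random-search hill climbing standing in for model-free RL, and the 1.5x stagnation threshold are all hypothetical choices made only for readability.

```python
# Minimal sketch of a Reinforcement Twinning-style loop on a 1-D toy plant.
# Everything below is illustrative; the paper treats far richer systems.
import numpy as np

rng = np.random.default_rng(0)

class DigitalTwin:
    """Parametric surrogate dx/dt = a*x + b*u, fitted to real-system data."""
    def __init__(self):
        self.params = np.array([0.0, 0.0])          # [a, b], initially unknown

    def step(self, x, u, dt=0.1):
        a, b = self.params
        return x + dt * (a * x + b * u)

    def assimilate(self, xs, us, xs_next, lr=10.0, iters=200, dt=0.1):
        # Gradient descent on the one-step prediction error; stands in for the
        # adjoint-based assimilation used for more complex twins in the paper.
        for _ in range(iters):
            pred = xs + dt * (self.params[0] * xs + self.params[1] * us)
            err = pred - xs_next
            grad = np.array([np.mean(err * dt * xs), np.mean(err * dt * us)])
            self.params -= lr * grad

class LinearPolicy:
    """Proportional controller u = gain * x; a stand-in for a policy network."""
    def __init__(self, gain=0.0):
        self.gain = gain
    def act(self, x):
        return self.gain * x
    def clone_from(self, other):
        self.gain = other.gain                      # "expert demonstrator" cloning

def real_step(x, u, dt=0.1):
    # Ground-truth plant, hidden from the twin: dx/dt = -0.5*x + u, plus noise.
    return x + dt * (-0.5 * x + u) + 0.01 * rng.standard_normal()

def rollout_cost(policy, step_fn, x0=1.0, horizon=50):
    """Quadratic regulation cost of a policy under a given one-step model."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = policy.act(x)
        cost += x**2 + 0.1 * u**2
        x = step_fn(x, u)
    return cost

twin = DigitalTwin()
mb_policy, mf_policy = LinearPolicy(), LinearPolicy()

for episode in range(20):
    # 1) The currently better policy (judged on the twin) acts on the real system.
    best = min((mb_policy, mf_policy), key=lambda p: rollout_cost(p, twin.step))
    xs, us, xs_next, x = [], [], [], 1.0
    for _ in range(50):
        u = best.act(x) + 0.05 * rng.standard_normal()   # mild exploration noise
        xn = real_step(x, u)
        xs.append(x); us.append(u); xs_next.append(xn)
        x = xn

    # 2) Assimilate the newly collected real-system data into the twin.
    twin.assimilate(np.array(xs), np.array(us), np.array(xs_next))

    # 3) Model-based path: choose the gain that minimises the cost on the twin.
    gains = np.linspace(-3.0, 0.0, 31)
    mb_policy.gain = min(gains, key=lambda g: rollout_cost(LinearPolicy(g), twin.step))

    # 4) Model-free path: random-search hill climbing on the twin (RL stand-in).
    trial = LinearPolicy(mf_policy.gain + 0.2 * rng.standard_normal())
    if rollout_cost(trial, twin.step) < rollout_cost(mf_policy, twin.step):
        mf_policy.gain = trial.gain

    # 5) If one path stagnates far behind the other, clone the better policy.
    c_mb = rollout_cost(mb_policy, twin.step)
    c_mf = rollout_cost(mf_policy, twin.step)
    if c_mf > 1.5 * c_mb:
        mf_policy.clone_from(mb_policy)
    elif c_mb > 1.5 * c_mf:
        mb_policy.clone_from(mf_policy)

print("identified twin parameters [a, b]:", twin.params)
print("model-based gain:", mb_policy.gain, "model-free gain:", mf_policy.gain)
```

In this toy setting the twin parameters converge towards the hidden plant coefficients within a few episodes, which mirrors (in spirit only) the sample efficiency of the adjoint-based twin training reported in the abstract.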
Pages: 28