A model-based deep reinforcement learning approach to the nonblocking coordination of modular supervisors of discrete event systems

Cited: 4
Authors
Yang, Junjun [1 ]
Tan, Kaige [2 ]
Feng, Lei [2 ]
Li, Zhiwu [1 ,3 ]
Affiliations
[1] Xidian Univ, Sch Electromech Engn, Xian 710071, Peoples R China
[2] KTH Royal Inst Technol, Dept Machine Design, S-10044 Stockholm, Sweden
[3] Macau Univ Sci & Technol, Inst Syst Engn, Taipa, Macau, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; Discrete event system; Local modular control; Supervisory control theory; COMPLEXITY; DESIGN;
DOI
10.1016/j.ins.2023.02.033
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Modular supervisory control may lead to conflicts among the modular supervisors of large-scale discrete event systems. Existing methods for ensuring nonblocking control either exploit favorable structures in the system model to guarantee the nonblocking property of the modular supervisors, or employ hierarchical model abstraction to reduce the computational complexity of designing a nonblocking coordinator. The nonblocking modular control problem is, in general, NP-hard. This study integrates supervisory control theory with a model-based deep reinforcement learning method to synthesize a nonblocking coordinator for the modular supervisors. The deep reinforcement learning method significantly reduces the computational complexity by avoiding the synchronous composition of the multiple modular supervisors and the plant models. The supervisory control function is approximated by a deep neural network instead of a large finite automaton. Furthermore, the proposed model-based deep reinforcement learning method is more efficient than the standard deep Q network algorithm.
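The core idea in the abstract — learning a coordinator that steers the closed-loop system toward marked states and away from blocking states, instead of composing the supervisors into one large automaton — can be illustrated with a minimal sketch. The toy automaton, its event names, and the reward values below are invented for illustration, and a tabular Q-table stands in for the paper's deep Q network; this is not the authors' algorithm, only the reward-shaping idea it rests on.

```python
import random

# Toy plant under modular supervision: state 3 is marked (task complete),
# state 2 is a blocking state (deadlock, not marked). The coordinator must
# learn to disable event "b", which leads toward the blocking state.
TRANSITIONS = {
    0: {"a": 1, "b": 2},
    1: {"a": 3, "b": 2},
    2: {},          # blocking: no enabled events, not marked
    3: {},          # marked state
}
MARKED, BLOCKING = 3, 2
EVENTS = ["a", "b"]

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning of a nonblocking coordination policy."""
    rng = random.Random(seed)
    q = {(s, e): 0.0 for s in TRANSITIONS for e in EVENTS}
    for _ in range(episodes):
        s = 0
        while TRANSITIONS[s]:                      # stop in terminal states
            enabled = list(TRANSITIONS[s])
            if rng.random() < eps:
                e = rng.choice(enabled)            # explore
            else:
                e = max(enabled, key=lambda ev: q[(s, ev)])  # exploit
            s2 = TRANSITIONS[s][e]
            # Reward nonblocking behaviour: +1 for reaching the marked
            # state, -1 for entering a blocking state, 0 otherwise.
            r = 1.0 if s2 == MARKED else (-1.0 if s2 == BLOCKING else 0.0)
            nxt = max((q[(s2, ev)] for ev in TRANSITIONS[s2]), default=0.0)
            q[(s, e)] += alpha * (r + gamma * nxt - q[(s, e)])
            s = s2
    return q

def policy(q, s):
    """Greedy coordinator: enable the event with the highest Q-value."""
    return max(TRANSITIONS[s], key=lambda ev: q[(s, ev)])

q = train()
```

After training, `policy(q, 0)` and `policy(q, 1)` both select event `"a"`, i.e. the learned coordinator avoids the blocking state without ever building the synchronous composition; the paper replaces the Q-table with a deep neural network so the same idea scales to state spaces too large to enumerate.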
Pages: 305-321 (17 pages)
Related Papers
(50 total)
  • [31] SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning
    Zhang, Marvin
    Vikram, Sharad
    Smith, Laura
    Abbeel, Pieter
    Johnson, Matthew J.
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [32] Cloud Reasoning Model-based Exploration for Deep Reinforcement Learning
    Li Chenxi
    Cao Lei
    Chen Xiliang
    Zhang Yongliang
    Xu Zhixiong
    Peng Hui
    Duan Liwen
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2018, 40 (01) : 244 - 248
  • [33] SIMULATION-BASED DEEP REINFORCEMENT LEARNING FOR MODULAR PRODUCTION SYSTEMS
    Feldkamp, Niclas
    Bergmann, Soeren
    Strassburger, Steffen
    2020 WINTER SIMULATION CONFERENCE (WSC), 2020, : 1596 - 1607
  • [34] Nonblocking check in fuzzy discrete event systems based on observation equivalence
    Chen, Xuesong
    Xing, Hongyan
    FUZZY SETS AND SYSTEMS, 2015, 269 : 47 - 64
  • [35] Temporal logic guided safe model-based reinforcement learning: A hybrid systems approach
    Cohen, Max H.
    Serlin, Zachary
    Leahy, Kevin
    Belta, Calin
    NONLINEAR ANALYSIS-HYBRID SYSTEMS, 2023, 47
  • [36] Modular Synthesis of Maximally Permissive Opacity-Enforcing Supervisors for Discrete Event Systems
    Takai, Shigemasa
    Watanabe, Yuta
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2011, E94A (03) : 1041 - 1044
  • [37] Deep Model-Based Reinforcement Learning for Predictive Control of Robotic Systems with Dense and Sparse Rewards
    Antonyshyn, Luka
    Givigi, Sidney
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (03)
  • [38] Model-based deep reinforcement learning for accelerated learning from flow simulations
    Weiner, Andre
    Geise, Janis
    MECCANICA, 2024,
  • [39] Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid
    Veith, Eric Msp
    Logemann, Torben
    Berezin, Aleksandr
    Wellssow, Arlena
    Balduin, Stephan
    2024 12TH WORKSHOP ON MODELING AND SIMULATION OF CYBER-PHYSICAL ENERGY SYSTEMS, MSCPES, 2024,
  • [40] Control Approach Combining Reinforcement Learning and Model-Based Control
    Okawa, Yoshihiro
    Sasaki, Tomotake
    Iwane, Hidenao
    2019 12TH ASIAN CONTROL CONFERENCE (ASCC), 2019, : 1419 - 1424