Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in Industry 4.0

Cited by: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing-time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing-time variability, achieving a Mean Absolute Percentage Error (MAPE) in throughput, calculated as the percentage difference between the actual and the target throughput, ranging from 0.3% to 2.3% with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes the limitations of analytical methods in stochastic environments, validation of the DQN agent's adaptability across varying production scenarios, and a demonstration of the framework's scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
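The control loop the abstract describes (an RL agent adjusting a CONWIP cap to track a target throughput) can be sketched in miniature. The sketch below is an illustration only: it substitutes tabular Q-learning for the paper's DQN, and a toy saturating WIP-to-throughput curve with lognormal noise for the paper's discrete-event flow-shop simulation. All function names, state/action definitions, and parameter values here are assumptions for the sketch, not the authors' implementation.

```python
import random
from collections import defaultdict

def simulate_throughput(wip_cap, rng):
    # Toy stand-in for the DES model: throughput saturates as WIP grows,
    # with lognormal noise mimicking non-exponential processing-time variability.
    base = wip_cap / (wip_cap + 4.0)
    return base * rng.lognormvariate(0.0, 0.1)

def train_agent(target=0.6, episodes=3000, seed=0):
    """Tabular Q-learning sketch: state = current CONWIP cap,
    actions = lower/keep/raise the cap, reward = negative relative
    deviation of observed throughput from the target."""
    rng = random.Random(seed)
    q = defaultdict(float)            # Q-values keyed by (wip_cap, action)
    actions = (-1, 0, +1)
    wip, eps, alpha, gamma = 5, 0.2, 0.1, 0.9
    for _ in range(episodes):
        # Epsilon-greedy action selection over the three cap adjustments.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(wip, x)])
        nxt = min(20, max(1, wip + a))            # clamp cap to [1, 20]
        th = simulate_throughput(nxt, rng)
        r = -abs(th - target) / target            # penalise throughput error
        best_next = max(q[(nxt, x)] for x in actions)
        q[(wip, a)] += alpha * (r + gamma * best_next - q[(wip, a)])
        wip = nxt
    return q, wip
```

Under a fixed seed, the greedy policy tends to settle the cap in the region where the toy curve meets the target throughput; in the paper this role is played by a DQN interacting with a DES model of the flow shop.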
Pages: 13