Adaptive manufacturing control with Deep Reinforcement Learning for dynamic WIP management in industry 4.0

Cited by: 0
Authors
Vespoli, Silvestro [1 ]
Mattera, Giulio [1 ]
Marchesano, Maria Grazia [1 ]
Nele, Luigi [1 ]
Guizzi, Guido [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Chem Mat & Ind Prod Engn, Piazzale Tecchio 80, I-80125 Naples, NA, Italy
Keywords
Smart production systems; Adaptive manufacturing control; Industry 4.0; Hybrid semi-heterarchical architecture; Deep Reinforcement Learning; CONWIP; AGENT; ARCHITECTURES; SYSTEMS
DOI
10.1016/j.cie.2025.110966
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
In the context of Industry 4.0, manufacturing systems face increased complexity and uncertainty due to elevated product customisation and demand variability. This paper presents a novel framework for adaptive Work-In-Progress (WIP) control in semi-heterarchical architectures, addressing the limitations of traditional analytical methods that rely on exponential processing-time distributions. Integrating Deep Reinforcement Learning (DRL) with Discrete Event Simulation (DES) enables model-free control of flow-shop production systems under non-exponential, stochastic processing times. A Deep Q-Network (DQN) agent dynamically manages WIP levels in a CONstant Work In Progress (CONWIP) environment, learning optimal control policies directly from system interactions. The framework's effectiveness is demonstrated through extensive experiments with varying machine numbers, processing times, and system variability. The results show robust performance in tracking the target throughput and adapting to processing-time variability, achieving Mean Absolute Percentage Errors (MAPE) in throughput - calculated as the percentage difference between the actual and the target throughput - ranging from 0.3% to 2.3%, with standard deviations of 5.5% to 8.4%. Key contributions include the development of a data-driven WIP control approach that overcomes the limitations of analytical methods in stochastic environments, validation of the DQN agent's adaptability across varying production scenarios, and demonstration of the framework's scalability in realistic manufacturing settings. This research bridges the gap between conventional WIP control methods and Industry 4.0 requirements, offering manufacturers an adaptive solution for enhanced production efficiency.
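To make the control loop described in the abstract concrete, the minimal Python sketch below shows an agent repeatedly adjusting the CONWIP WIP cap of a simulated flow shop so that realised throughput tracks a target, with a reward equal to the negative tracking error; the throughput MAPE reported in the abstract is then, presumably, 100/N * sum_i |TH_i - TH_target| / TH_target over the evaluation runs. Everything in the sketch is an illustrative assumption rather than the authors' implementation: the toy throughput model, the parameter values, and the tabular epsilon-greedy Q-learning update, which is a dependency-free stand-in for the paper's Deep Q-Network trained against a full DES.

# Minimal sketch (assumptions only): an RL agent tunes the CONWIP WIP cap of a
# toy flow-shop model so that throughput tracks a target. The simulator, the
# parameter values, and the tabular Q-learning stand-in for the DQN are all
# illustrative, not the paper's implementation.
import random

TARGET_TP = 0.8            # target throughput (parts per time unit), assumed
WIP_LEVELS = range(1, 21)  # admissible CONWIP caps, assumed
ACTIONS = (-1, 0, 1)       # decrease, keep, or increase the WIP cap

def simulate_throughput(wip, n_machines=5, mean_pt=1.0, cv=0.5):
    """Crude stand-in for the DES: throughput saturates with WIP and is
    perturbed to mimic (non-exponential) processing-time variability."""
    bottleneck_rate = 1.0 / mean_pt
    tp = bottleneck_rate * wip / (wip + n_machines - 1)  # saturating WIP curve
    noise = random.gauss(0.0, cv * 0.05)
    return max(0.0, tp + noise)

def reward(tp):
    # Negative absolute tracking error: closer to the target is better.
    return -abs(tp - TARGET_TP)

# Tabular epsilon-greedy Q-learning, used only to keep the sketch
# dependency-free; the paper uses a Deep Q-Network instead.
Q = {(w, a): 0.0 for w in WIP_LEVELS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
wip = 10
for step in range(5000):
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(wip, x)])
    new_wip = min(max(wip + a, min(WIP_LEVELS)), max(WIP_LEVELS))
    tp = simulate_throughput(new_wip)
    best_next = max(Q[(new_wip, b)] for b in ACTIONS)
    Q[(wip, a)] += alpha * (reward(tp) + gamma * best_next - Q[(wip, a)])
    wip = new_wip

print("learned WIP cap:", wip)

In the paper's setting, the Q-table would be replaced by a neural network fed with richer state information (e.g. current WIP level and recent throughput), and simulate_throughput by the Discrete Event Simulation of the flow shop; the learning principle, choosing WIP adjustments from observed system interactions to minimise the throughput tracking error, is the same.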
Pages: 13
Related papers (50 in total)
  • [1] A Review of Deep Reinforcement Learning Approaches for Smart Manufacturing in Industry 4.0 and 5.0 Framework. del Real Torres, Alejandro; Stefan Andreiana, Doru; Ojeda Roldan, Alvaro; Hernandez Bustos, Alfonso; Acevedo Galicia, Luis Enrique. APPLIED SCIENCES-BASEL, 2022, 12 (23).
  • [2] Dynamic Control of a Fiber Manufacturing Process Using Deep Reinforcement Learning. Kim, Sangwoon; Kim, David Donghyun; Anthony, Brian W. IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2022, 27 (02): 1128-1137.
  • [3] Self-Adaptive Traffic Control Model With Behavior Trees and Reinforcement Learning for AGV in Industry 4.0. Hu, Hao; Jia, Xiaoliang; Liu, Kuo; Sun, Bingyang. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (12): 7968-7979.
  • [4] Automatic assembly cost control method of Industry 4.0 production line based on deep reinforcement learning. Zhou, H. International Journal of Manufacturing Technology and Management, 2022, 36 (5-6): 352-367.
  • [5] Reinforcement Learning for Autonomous Process Control in Industry 4.0: Advantages and Challenges. Nievas, Nuria; Pages-Bernaus, Adela; Bonada, Francesc; Echeverria, Lluis; Domingo, Xavier. APPLIED ARTIFICIAL INTELLIGENCE, 2024, 38 (01).
  • [6] Optimization Planning Scheduling Problem in Industry 4.0 Using Deep Reinforcement Learning. Terol, Marcos; Gomez-Gasquet, Pedro; Boza, Andres. IOT AND DATA SCIENCE IN ENGINEERING MANAGEMENT, 2023, 160: 136-140.
  • [7] Enabling adaptable Industry 4.0 automation with a modular deep reinforcement learning framework. Raziei, Zohreh; Moghaddam, Mohsen. IFAC PAPERSONLINE, 2021, 54 (01): 546-551.
  • [8] Dynamic resource matching in manufacturing using deep reinforcement learning. Panda, Saunak Kumar; Xiang, Yisha; Liu, Ruiqi. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2024, 318 (02): 408-423.
  • [9] Dynamic Adaptive Streaming Control based on Deep Reinforcement Learning in Named Data Networking. Qiu, Shengyan; Tan, Xiaobin; Zhu, Jin. 2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018: 9478-9482.
  • [10] Adaptive Tuning of Dynamic Matrix Control for Uncertain Industrial Systems With Deep Reinforcement Learning. Zhang, Yang; Wang, Peng; Yu, Liying; Li, Ning. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024.