Factory Simulation of Optimization Techniques Based on Deep Reinforcement Learning for Storage Devices

Cited by: 1
Authors
Lim, Ju-Bin [1,2]
Jeong, Jongpil [1]
Affiliations
[1] Sungkyunkwan Univ, Dept Smart Factory Convergence, 2066 Seobu Ro, Suwon 16419, Gyeonggi Do, South Korea
[2] LG Innotek, AI Machine Vis Smart Factory Lab, 111 Jinwi2sandan Ro, Pyeongtaek Si 17708, Gyeonggi Do, South Korea
Source
APPLIED SCIENCES-BASEL, 2023, Vol. 13, Issue 17
Funding
National Research Foundation of Singapore
Keywords
conceptualization; methodology; job allocation; reinforcement learning; stocker; digital twin; simulation; Industry 4.0
DOI
10.3390/app13179690
Chinese Library Classification
O6 [Chemistry]
Subject Classification Code
0703
Abstract
In this study, reinforcement learning (RL) was used in a factory simulation to optimize storage devices for use in Industry 4.0 and digital twins. Industry 4.0 increases productivity and efficiency in manufacturing through automation, data exchange, and the integration of new technologies. Innovative technologies such as the Internet of Things (IoT), artificial intelligence (AI), and big data analytics automate manufacturing processes and integrate data with production systems, enabling production data to be monitored and analyzed in real time and factory operations to be optimized. A digital twin is a digital model of a physical product or process in the real world. It is built on data and real-time information collected through sensors and accurately simulates the behavior and performance of a real-world manufacturing floor. With a digital twin, data can be leveraged at every stage of product design, development, manufacturing, and maintenance to predict, solve, and optimize problems. First, we defined an RL environment, modeled it, and validated its ability to simulate a real physical system. Subsequently, we introduced a method for calculating reward signals and applying them to the environment to ensure that the behavior of the RL agent is aligned with the task objective. Traditional approaches use simple reward functions to tune the behavior of RL agents; these approaches issue rewards according to predefined rules and often use reward signals that are unrelated to the task goal. In this study, however, the reward-signal calculation was modified to consider the task goal and the characteristics of the physical system, producing more realistic and meaningful rewards. This method reflects the complex interactions and constraints that arise while optimizing the storage device and yields episodes that more accurately capture agent behavior during reinforcement learning. Unlike a traditional simple reward function, it reflects the complexity and realism of the storage-optimization task, making the reward more sophisticated and effective. A stocker simulation model, representing a storage device that handles logistics in a manufacturing production area, was used to validate the effectiveness of RL. The results revealed that RL is a useful tool for automating and optimizing complex logistics systems, increasing the applicability of RL in logistics. We proposed a novel method for training an agent with the proximal policy optimization (PPO) algorithm, and the agent was optimized by configuring various learning options. Applying reinforcement learning yielded an effectiveness of 30-100%, and the method can be extended to other fields.
Pages: 18
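The abstract describes training a storage-allocation agent with the proximal policy optimization (PPO) algorithm and a task-aware reward that reflects the constraints of the stocker. The following is a minimal illustrative sketch of such a setup, assuming a Gymnasium-style environment and the PPO implementation from Stable-Baselines3; the StockerEnv class, its slot-occupancy state, and the distance-based reward shaping are hypothetical simplifications for illustration and are not taken from the paper.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class StockerEnv(gym.Env):
    """Toy stocker (storage rack): choose a slot for each incoming lot.
    Hypothetical simplification, not the environment used in the paper."""

    def __init__(self, n_slots=10, episode_len=50):
        super().__init__()
        self.n_slots = n_slots
        self.episode_len = episode_len
        # Observation: occupancy of each slot (0 = empty, 1 = full).
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_slots,), dtype=np.float32)
        # Action: index of the slot in which to store the incoming lot.
        self.action_space = spaces.Discrete(n_slots)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.occupancy = np.zeros(self.n_slots, dtype=np.float32)
        self.t = 0
        return self.occupancy.copy(), {}

    def step(self, action):
        self.t += 1
        if self.occupancy[action] == 1.0:
            reward = -1.0  # task constraint: penalize storing into an occupied slot
        else:
            self.occupancy[action] = 1.0
            # Task-aware shaping: prefer slots near the I/O port (index 0)
            # so that crane travel time stays low.
            reward = 1.0 - action / self.n_slots
        # Randomly retrieve a stored lot to mimic outbound logistics.
        stored = np.flatnonzero(self.occupancy)
        if stored.size and self.np_random.random() < 0.5:
            self.occupancy[self.np_random.choice(stored)] = 0.0
        terminated = bool(self.occupancy.all())
        truncated = self.t >= self.episode_len
        return self.occupancy.copy(), reward, terminated, truncated, {}

if __name__ == "__main__":
    # Train a PPO agent on the toy environment; learning options such as the
    # policy network, learning rate, or rollout length can be varied here.
    model = PPO("MlpPolicy", StockerEnv(), verbose=0)
    model.learn(total_timesteps=20_000)
```

The shaped reward in step() illustrates the idea highlighted in the abstract: instead of issuing rewards from predefined rules unrelated to the task goal, the reward encodes physical characteristics of the storage device (slot occupancy and distance to the I/O port), so that the learned policy aligns with the logistics objective.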