Deep reinforcement learning driven trajectory-based meta-heuristic for distributed heterogeneous flexible job shop scheduling problem

Times Cited: 1
Authors
Zhang, Qichen [1 ]
Shao, Weishi [1 ,3 ,4 ]
Shao, Zhongshi [2 ]
Pi, Dechang [4 ]
Gao, Jiaquan [1 ,3 ]
Affiliations
[1] Nanjing Normal Univ, Sch Comp & Elect Informat, Sch Artificial Intelligence, Nanjing, Peoples R China
[2] Shaanxi Normal Univ, Sch Comp Sci, Xian, Peoples R China
[3] Minist Educ, Key Lab Numer Simulat Large Scale Complex Syst, Beijing, Peoples R China
[4] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Distributed heterogeneous flexible job shop scheduling problem; Deep Q network; Variable neighborhood search; Makespan; Critical path; ALGORITHM; SEARCH;
DOI
10.1016/j.swevo.2024.101753
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As production environments evolve, distributed manufacturing exhibits heterogeneous characteristics, including diverse machines, workers, and production processes. This paper examines a distributed heterogeneous flexible job shop scheduling problem (DHFJSP) with varying processing times. A mixed integer linear programming (MILP) model of the DHFJSP is formulated with the objective of minimizing the makespan. To solve the DHFJSP, we propose a variable neighborhood search algorithm whose shaking procedure is automatically designed by a deep Q network (DQN-VNS). By analyzing schedules, sixty-one types of scheduling features are extracted; these features serve as states, and six shaking strategies serve as actions. A DHFJSP environment simulator is developed to train the deep Q network, and the well-trained DQN then generates the shaking procedure for VNS. Additionally, a greedy initialization method is proposed to enhance the quality of the initial solution, and seven efficient critical path-based neighborhood structures with a three-vector encoding scheme are introduced to improve the local search. Numerical experiments on instances of various scales validate the effectiveness of the MILP model and the DQN-VNS algorithm. The results show that DQN-VNS achieves an average relative percentage deviation (ARPD) of 3.2%, an approximately 88.45% reduction relative to the best of the six comparison algorithms, whose ARPD is 27.7%. This significant reduction in ARPD highlights the superior stability and performance of the proposed DQN-VNS algorithm.
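The abstract describes a control loop in which a trained DQN picks one of six shaking strategies from sixty-one schedule features inside a VNS iteration. The sketch below is a minimal illustration of that control flow under stated assumptions, not the authors' implementation: the feature extractor, shaking operators, critical-path local search, and makespan evaluator are hypothetical placeholders, and only the state and action dimensions (61 and 6) are taken from the abstract.

```python
import random

import torch
import torch.nn as nn

# Dimensions taken from the abstract: 61 scheduling features (state),
# 6 shaking strategies (actions). Everything else is a placeholder.
N_FEATURES, N_ACTIONS = 61, 6


class QNetwork(nn.Module):
    """Small MLP mapping schedule features to Q-values over shaking strategies."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def dqn_vns(initial_schedule, extract_features, shaking_ops, local_search,
            makespan, q_net, max_iters=200, epsilon=0.05):
    """Skeleton of a DQN-guided variable neighborhood search.

    `extract_features`, `shaking_ops`, `local_search`, and `makespan` are
    hypothetical callables standing in for the paper's scheduling features,
    shaking strategies, and critical path-based neighborhood search.
    """
    best = current = initial_schedule
    for _ in range(max_iters):
        state = torch.tensor(extract_features(current), dtype=torch.float32)
        # Epsilon-greedy choice: the trained DQN selects the shaking strategy.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())
        shaken = shaking_ops[action](current)        # shaking (perturbation)
        candidate = local_search(shaken)             # critical path-based local search
        if makespan(candidate) < makespan(current):  # accept improving schedules
            current = candidate
            if makespan(current) < makespan(best):
                best = current
    return best
```

For the reported figures, the relative improvement follows directly from the quoted ARPD values: (27.7 − 3.2) / 27.7 ≈ 88.4%, consistent with the stated reduction.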
Pages: 23