A deep reinforcement learning model for dynamic job-shop scheduling problem with uncertain processing time
Citations: 5
Authors: Wu, Xinquan [1]; Yan, Xuefeng [1,2]; Guan, Donghai [1,2]; Wei, Mingqiang [1,2]
Affiliations:
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[2] Collaborat Innovat Ctr Novel Software Technol & In, Nanjing 210016, Peoples R China
Keywords:
Dynamic job shop scheduling problem;
Deep reinforcement learning;
Prioritized experience replay;
PPO;
Uncertainty;
FEATURE-SELECTION;
ALGORITHM;
OPTIMIZATION;
HEURISTICS;
SEARCH;
RULES;
DOI: 10.1016/j.engappai.2023.107790
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract:
The dynamic job-shop scheduling problem (DJSP) is a class of scheduling tasks in which rescheduling is performed when uncertainties, such as uncertain operation processing times, are encountered. However, current deep reinforcement learning (DRL) scheduling approaches struggle to train convergent scheduling policies as the problem scale increases, which is critical for rescheduling under uncertainty. In this paper, we propose a DRL scheduling method for the DJSP based on proximal policy optimization (PPO) with hybrid prioritized experience replay. The job-shop scheduling problem is formulated as a sequential decision-making problem based on a Markov decision process (MDP): a novel state representation is designed based on the feasible-solution matrix, which depicts the scheduling order of a scheduling task; a set of paired priority dispatching rules (PDRs) serves as the action space; and a new, intuitive reward function is established based on machine idle time. Moreover, a new hybrid prioritized experience replay method for PPO is proposed to reduce training time, in which samples with positive temporal-difference (TD) error are replayed. Static experiments on classic benchmark instances show that the makespan obtained by our scheduling agent is reduced by 1.59% on average compared with the best-known DRL methods. In addition, dynamic experiments demonstrate that the training time of the reused scheduling policy is reduced by 27% compared with a retrained policy when uncertainties such as uncertain operation processing time are encountered.
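The replay idea described in the abstract — keeping only transitions whose one-step TD error is positive — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function names, the tabular value function, and the toy transitions are all assumptions introduced for clarity.

```python
def td_error(reward, gamma, v_next, v_curr):
    """One-step temporal-difference error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * v_next - v_curr

def select_positive_td(transitions, values, gamma=0.99):
    """Filter a batch of (state, action, reward, next_state) transitions,
    keeping only those with positive TD error, sorted so that larger TD
    errors come first (a sketch of the prioritization idea, assuming a
    simple tabular value function `values`)."""
    kept = []
    for (s, a, r, s_next) in transitions:
        delta = td_error(r, gamma, values[s_next], values[s])
        if delta > 0:
            kept.append(((s, a, r, s_next), delta))
    # Larger TD error -> replayed with higher priority.
    kept.sort(key=lambda item: item[1], reverse=True)
    return [t for t, _ in kept]
```

With `values = {0: 0.0, 1: 1.0, 2: 0.5}`, the transition `(0, 'a', 0.0, 1)` has TD error `0.0 + 0.99*1.0 - 0.0 = 0.99 > 0` and is kept, while `(1, 'b', -2.0, 2)` has a negative TD error and is discarded.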
Pages: 14