A stochastic track maintenance scheduling model based on deep reinforcement learning approaches

Cited by: 1
Authors
Lee, Jun S. [1 ]
Yeo, In-Ho [1 ]
Bae, Younghoon [1 ]
Affiliations
[1] Korea Railroad Research Institute, Uiwang-si, South Korea
Keywords
Railway maintenance; Stochastic deterioration model; Deep reinforcement learning; Optimal scheduling; RAILWAY; OPTIMIZATION;
DOI
10.1016/j.ress.2023.109709
Chinese Library Classification
T [Industrial technology];
Discipline code
08;
Abstract
A data-driven railway track maintenance scheduling framework based on a stochastic track deterioration model and deep reinforcement learning is proposed. Various track conditions, such as track geometry and the support capacity of the infrastructure, are considered in estimating the track deterioration rate, and the resulting track quality index is used to predict the state of each track segment. Further, additional field-specific constraints, including the number of tampings and the latest maintenance time of ballasted track, are incorporated to reflect field conditions as accurately as possible. From these conditions, the optimal maintenance action for each track segment is determined under the combined constraints of cost and ride comfort. In the present study, two reinforcement learning (RL) models, the Dueling Deep Q-Network (DuDQN) and the Asynchronous Advantage Actor-Critic (A3C), were employed to build a decision support system for track maintenance, and their advantages and disadvantages were compared. The models were applied to field maintenance data, and the DuDQN model was found to be more suitable in our case. The optimal number of tampings before renewal was determined from the maintenance costs and field conditions, and the cost effect of ride comfort was investigated using the proposed deep RL model. Finally, possible improvements to the models are briefly outlined.
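To make the ingredients of the abstract concrete, the sketch below pairs a toy single-segment environment having a stochastic deterioration process (track quality index state, do-nothing/tamping/renewal actions, a reward trading off maintenance cost against a ride-comfort penalty, and a cap on tampings before renewal) with a dueling Q-network head of the kind a DuDQN agent would use. This is a minimal illustrative sketch, not the paper's implementation: every class name, cost, deterioration rate, and threshold is an invented placeholder.

```python
# Illustrative sketch only; all constants are assumptions, not values from the paper.
import numpy as np
import torch
import torch.nn as nn


class TrackSegmentEnv:
    """State = (track quality index, fraction of tamping budget used).
    Actions: 0 = do nothing, 1 = tamping, 2 = renewal."""

    def __init__(self, max_tampings=6, tqi_limit=3.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.max_tampings = max_tampings      # assumed cap on tampings before renewal
        self.tqi_limit = tqi_limit            # assumed ride-comfort threshold on the TQI
        self.reset()

    def reset(self):
        self.tqi, self.tampings = 0.5, 0      # TQI: lower is better (assumed scale)
        return self._state()

    def _state(self):
        return np.array([self.tqi, self.tampings / self.max_tampings], dtype=np.float32)

    def step(self, action):
        cost = 0.0
        if action == 1 and self.tampings < self.max_tampings:
            self.tqi *= 0.6                   # tamping gives only partial recovery
            self.tampings += 1
            cost = 1.0                        # placeholder tamping cost
        elif action == 2:
            self.tqi, self.tampings = 0.5, 0  # renewal restores the segment
            cost = 10.0                       # placeholder renewal cost
        # Stochastic deterioration: mean rate grows with accumulated tampings,
        # scaled by lognormal noise to mimic a stochastic deterioration model.
        rate = 0.05 * (1.0 + 0.1 * self.tampings)
        self.tqi += rate * self.rng.lognormal(mean=0.0, sigma=0.25)
        comfort_penalty = 5.0 if self.tqi > self.tqi_limit else 0.0
        reward = -(cost + comfort_penalty)    # reward trades off cost and ride comfort
        return self._state(), reward, False, {}


class DuelingQNet(nn.Module):
    """Dueling Q-network: shared features split into value and advantage streams."""

    def __init__(self, state_dim=2, n_actions=3, hidden=64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, x):
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)


if __name__ == "__main__":
    env, qnet = TrackSegmentEnv(), DuelingQNet()
    state = env.reset()
    with torch.no_grad():
        for _ in range(5):                    # greedy rollout with an untrained network
            q = qnet(torch.from_numpy(state).unsqueeze(0))
            action = int(q.argmax(dim=1).item())
            state, reward, _, _ = env.step(action)
            print(action, round(reward, 2))
```

In a full training loop, the Q-network would be fitted with experience replay and a target network (or replaced by an A3C actor-critic), and the schedule would be read off as the greedy action per segment and time step.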
Pages: 12