Hierarchical reinforcement learning for transportation infrastructure maintenance planning

Cited by: 5
Authors
Hamida, Zachary [1 ]
Goulet, James-A. [1 ]
Affiliations
[1] Polytech Montreal, Dept Civil Geol & Min Engn, 2500 Chem Polytech, Montreal, PQ H3T 1J4, Canada
Keywords
Maintenance planning; Reinforcement learning; RL environment; Deep Q-learning; Infrastructure deterioration; State-space models
DOI
10.1016/j.ress.2023.109214
CLC number
T [Industrial Technology]
Subject classification code
08
Abstract
Maintenance planning on bridges commonly faces multiple challenges, mainly related to complexity and scale. These challenges stem from the large number of structural elements in each bridge, in addition to the uncertainties surrounding their health condition, which is monitored through visual inspections at the element level. Recent developments have relied on deep reinforcement learning (RL) for solving maintenance planning problems, with the aim of minimizing long-term costs. Nonetheless, existing RL-based solutions have adopted approaches that often lack the capacity to scale due to the inherently large state and action spaces. The aim of this paper is to introduce a hierarchical RL formulation for maintenance planning, which naturally adapts to the hierarchy of information and decisions in infrastructure. The hierarchical formulation enables decomposing large state and action spaces into smaller ones by relying on state and temporal abstraction. An additional contribution of this paper is the development of an open-source RL environment that uses state-space models (SSM) to describe the propagation of the deterioration condition and speed over time. The functionality of this new environment is demonstrated by solving maintenance planning problems at the element level and the bridge level.
Pages: 12
Related papers
50 records in total
  • [41] Reinforcement Learning Based Trajectory Planning for Multi-UAV Load Transportation
    Estevez, Julian
    Manuel Lopez-Guede, Jose
    Del Valle-Echavarri, Javier
    Grana, Manuel
    IEEE ACCESS, 2024, 12 : 144009 - 144016
  • [42] Hierarchical Reinforcement Learning for Autonomous Decision Making and Motion Planning of Intelligent Vehicles
    Lu, Yang
    Xu, Xin
    Zhang, Xinglong
    Qian, Lilin
    Zhou, Xing
    IEEE ACCESS, 2020, 8 : 209776 - 209789
  • [43] Urban transportation: innovations in infrastructure planning and development
    Narayanaswami, Sundaravalli
    INTERNATIONAL JOURNAL OF LOGISTICS MANAGEMENT, 2017, 28 (01) : 150 - 171
  • [44] Hierarchical Evasive Path Planning Using Reinforcement Learning and Model Predictive Control
    Feher, Arpad
    Aradi, Szilard
    Becsi, Tamas
    IEEE ACCESS, 2020, 8 : 187470 - 187482
  • [45] Transportation Infrastructure Planning, Management, and Finance: Introduction
    Doll, Claus
    Durango-Cohen, Pablo L.
    Ueda, Takayuki
    JOURNAL OF INFRASTRUCTURE SYSTEMS, 2009, 15 (04) : 261 - 262
  • [46] A Safe Hierarchical Planning Framework for Complex Driving Scenarios based on Reinforcement Learning
    Li, Jinning
    Sun, Liting
    Chen, Jianyu
    Tomizuka, Masayoshi
    Zhan, Wei
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 2660 - 2666
  • [47] A deep reinforcement learning assisted simulated annealing algorithm for a maintenance planning problem
    Kosanoglu, Fuat
    Atmis, Mahir
    Turan, Hasan Huseyin
    ANNALS OF OPERATIONS RESEARCH, 2024, 339 (1-2) : 79 - 110
  • [48] Reinforcement and deep reinforcement learning-based solutions for machine maintenance planning, scheduling policies, and optimization
    Ogunfowora, Oluwaseyi
    Najjaran, Homayoun
    JOURNAL OF MANUFACTURING SYSTEMS, 2023, 70 : 244 - 263
  • [49] Concurrent Hierarchical Reinforcement Learning
    Marthi, Bhaskara
    Russell, Stuart
    Latham, David
    Guestrin, Carlos
    19TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI-05), 2005, : 779 - 785
  • [50] Hierarchical framework for interpretable and specialized deep reinforcement learning-based predictive maintenance
    Abbas, Ammar N.
    Chasparis, Georgios C.
    Kelleher, John D.
    DATA & KNOWLEDGE ENGINEERING, 2024, 149