Online Multimodal Transportation Planning using Deep Reinforcement Learning

Cited by: 6
Authors:
Farahani, Amirreza [1]
Genga, Laura [1]
Dijkman, Remco [1]
Affiliations:
[1] Eindhoven Univ Technol, Dept Ind Engn, NL-5612 AZ Eindhoven, Netherlands
Keywords:
MODEL; NETWORK; DESIGN
DOI:
10.1109/SMC52423.2021.9658943
CLC Classification Code:
TP3 [Computing technology, computer technology]
Subject Classification Code:
0812
Abstract
In this paper, we propose a Deep Reinforcement Learning approach to solve a multimodal transportation planning problem in which containers must be assigned to a truck or to trains that will transport them to their destination. While traditional planning methods work "offline" (i.e., they take decisions for a batch of containers before transportation starts), the proposed approach is "online", in that it can take decisions for individual containers while transportation is being executed. Planning transportation online helps to respond effectively to unforeseen events that may disrupt the original transportation plan, thus supporting companies in lowering transportation costs. We implemented different container selection heuristics within the proposed Deep Reinforcement Learning algorithm and evaluated its performance for each heuristic using data that simulate a realistic scenario, designed on the basis of a real case study at a logistics company. The experimental results revealed that the proposed method was able to learn effective patterns of container assignment: it outperformed the tested competitors by 20.48% to 55.32% in total transportation costs and by 7.51% to 20.54% in utilization of train capacity. Furthermore, it came within 2.7% (cost) and 0.72% (capacity) of the optimal solution generated by an Integer Linear Programming solver in an offline setting.
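To illustrate the online setting the abstract describes (not the paper's actual DRL model or cost data, which are not part of this record), the toy sketch below assigns containers one at a time to a cheap, capacity-limited train or an always-available but more expensive truck. The costs, capacity, and the greedy "use the train while it has room" policy are hypothetical stand-ins for the learned policy and the container selection heuristics mentioned above.

```python
# Toy illustration of online container assignment (hypothetical numbers).
# Each arriving container must be routed immediately, without knowledge
# of future arrivals -- the key difference from offline batch planning.

TRAIN_COST = 50      # assumed cost per container by train
TRUCK_COST = 120     # assumed cost per container by truck
TRAIN_CAPACITY = 3   # assumed number of slots on the scheduled train

def plan_online(containers, capacity=TRAIN_CAPACITY):
    """Greedy online policy: use the train while capacity remains,
    then fall back to trucks. Returns (assignment list, total cost)."""
    total, used, plan = 0, 0, []
    for c in containers:
        if used < capacity:
            plan.append((c, "train"))
            total += TRAIN_COST
            used += 1
        else:
            plan.append((c, "truck"))
            total += TRUCK_COST
    return plan, total

plan, cost = plan_online(["c1", "c2", "c3", "c4", "c5"])
print(plan)   # first 3 containers by train, the remaining 2 by truck
print(cost)   # 3*50 + 2*120 = 390
```

In the paper, this fixed greedy rule is replaced by a learned policy that can also react to disruptions mid-execution; an offline ILP solver, by contrast, would optimize the whole batch at once before departure.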
Pages: 1691-1698
Page count: 8
Related papers (50 results)
  • [1] Tackling Uncertainty in Online Multimodal Transportation Planning Using Deep Reinforcement Learning
    Farahani, Amirreza
    Genga, Laura
    Dijkman, Remco
    [J]. COMPUTATIONAL LOGISTICS (ICCL 2021), 2021, 13004: 578-593
  • [2] A maintenance planning framework using online and offline deep reinforcement learning
    Bukhsh, Zaharah A.
    Molegraaf, Hajo
    Jansen, Nils
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023,
  • [3] Deep reinforcement learning of passenger behavior in multimodal journey planning with proportional fairness
    Chu, Kai-Fung
    Guo, Weisi
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35(27): 20221-20240
  • [4] Robot navigation in a crowd by integrating deep reinforcement learning and online planning
    Zhou, Zhiqian
    Zhu, Pengming
    Zeng, Zhiwen
    Xiao, Junhao
    Lu, Huimin
    Zhou, Zongtan
    [J]. APPLIED INTELLIGENCE, 2022, 52(13): 15600-15616
  • [5] UAV online path planning technology based on deep reinforcement learning
    Fan, Jiaxuan
    Wang, Zhenya
    Ren, Jinlei
    Lu, Ying
    Liu, Yiheng
    [J]. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020: 5382-5386
  • [6] Multimodal Biometrics Fusion Algorithm Using Deep Reinforcement Learning
    Huang, Quan
    [J]. MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [7] Deep Reinforcement Learning with Applications in Transportation
    Qin, Zhiwei
    Tang, Jian
    Ye, Jieping
    [J]. KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019: 3201-3202
  • [8] Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning
    You, Changxi
    Lu, Jianbo
    Filev, Dimitar
    Tsiotras, Panagiotis
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 114: 1-18