Solving the Vehicle Routing Problem with Stochastic Travel Cost Using Deep Reinforcement Learning

Cited: 0
Authors
Cai, Hao [1 ]
Xu, Peng [1 ]
Tang, Xifeng [1 ]
Lin, Gan [1 ]
Affiliations
[1] Hohai University, College of Civil and Transportation Engineering, Xikang Road, Nanjing 210024, People's Republic of China
Keywords
VRP-STC; graph attention networks; multi-head attention mechanism; deep reinforcement learning; GO; SHOGI; CHESS; GAME;
DOI
10.3390/electronics13163242
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The Vehicle Routing Problem (VRP) is a classic combinatorial optimization problem commonly encountered in transportation and logistics. This paper focuses on a variant of the VRP, the Vehicle Routing Problem with Stochastic Travel Cost (VRP-STC), in which the introduction of stochastic travel costs increases the problem's complexity and renders traditional algorithms unsuitable. The paper employs the GAT-AM model, which combines Graph Attention Networks (GAT) with a multi-head Attention Mechanism (AM) in an encoder-decoder architecture trained by deep reinforcement learning. The GAT layers in the encoder learn feature representations of the nodes in different subspaces, while the decoder uses multi-head AM to construct policies through both greedy and sampling decoding; this increases solution diversity and thereby helps find high-quality solutions. The REINFORCE with Rollout Baseline algorithm is used to train the learnable parameters of the neural network. Test results show that the advantage of GAT-AM grows as problem complexity increases, while traditional algorithms generally cannot reach the optimal solution within an acceptable timeframe.
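The abstract outlines an encoder-decoder construction policy trained with REINFORCE and a rollout baseline. Below is a minimal PyTorch sketch of that style of model, offered only as an illustration: the class and parameter names (EncoderLayer, GATAM), layer sizes, and head counts are assumptions rather than the authors' exact GAT-AM configuration; the encoder uses standard multi-head self-attention over a fully connected graph as a stand-in for the paper's GAT layers; and VRP-specific details (depot returns, capacity constraints, stochastic cost realizations) are omitted.

import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    # Attention layer over the fully connected node graph (a stand-in for a GAT layer).
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, h):                        # h: (batch, n_nodes, dim)
        a, _ = self.attn(h, h, h)
        h = self.norm1(h + a)
        return self.norm2(h + self.ff(h))

class GATAM(nn.Module):
    # Encoder-decoder policy: the encoder embeds all nodes, the decoder picks one
    # unvisited node per step, either greedily or by sampling from the policy.
    def __init__(self, node_feat=3, dim=128, heads=8, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(node_feat, dim)
        self.encoder = nn.Sequential(*[EncoderLayer(dim, heads) for _ in range(n_layers)])
        self.glimpse = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, sample=False):          # x: (batch, n_nodes, node_feat)
        h = self.encoder(self.embed(x))          # node embeddings
        batch, n, dim = h.shape
        visited = torch.zeros(batch, n, dtype=torch.bool, device=x.device)
        ctx = h.mean(dim=1, keepdim=True)        # graph context as the first query
        tour, log_p = [], []
        for _ in range(n):
            q, _ = self.glimpse(ctx, h, h, key_padding_mask=visited)
            logits = (self.proj(q) @ h.transpose(1, 2)).squeeze(1) / dim ** 0.5
            logits = logits.masked_fill(visited, float('-inf'))   # forbid revisits
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample() if sample else logits.argmax(dim=-1)
            tour.append(a)
            log_p.append(dist.log_prob(a))
            visited = visited.scatter(1, a.unsqueeze(1), True)
            ctx = h.gather(1, a.view(batch, 1, 1).expand(-1, 1, dim))  # last node as query
        return torch.stack(tour, 1), torch.stack(log_p, 1).sum(1)

A correspondingly hedged sketch of one REINFORCE step with a rollout baseline: the baseline is a frozen copy of the policy decoded greedily, and tour_cost is a hypothetical function that evaluates the (possibly stochastic) travel cost of the constructed routes.

def train_step(policy, baseline, coords, tour_cost, optimizer):
    tour, log_p = policy(coords, sample=True)            # sampled rollout
    with torch.no_grad():
        greedy_tour, _ = baseline(coords, sample=False)  # greedy baseline rollout
        advantage = tour_cost(coords, tour) - tour_cost(coords, greedy_tour)
    loss = (advantage * log_p).mean()                    # REINFORCE policy gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In rollout-baseline training of this kind, the baseline's weights are typically synchronized with the policy only when the policy significantly outperforms it on a held-out evaluation set.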
Pages: 19
Related Papers (50 in total)
  • [1] Iklassov, Zangir; Sobirov, Ikboljon; Solozabal, Ruben; Takac, Martin. Reinforcement Learning for Solving Stochastic Vehicle Routing Problem. Asian Conference on Machine Learning, Vol. 222, 2023.
  • [2] Li, Jingwen; Ma, Yining; Gao, Ruize; Cao, Zhiguang; Lim, Andrew; Song, Wen; Zhang, Jie. Deep Reinforcement Learning for Solving the Heterogeneous Capacitated Vehicle Routing Problem. IEEE Transactions on Cybernetics, 2022, 52(12): 13572-13585.
  • [3] Nazari, Mohammadreza; Oroojlooy, Afshin; Takac, Martin; Snyder, Lawrence V. Reinforcement Learning for Solving the Vehicle Routing Problem. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
  • [4] Lu, Chengxuan; Long, Jinjun; Xing, Zichao; Wu, Weimin; Gu, Yong; Luo, Jiliang; Huang, Yisheng. Deep Reinforcement Learning for Solving AGVs Routing Problem. Verification and Evaluation of Computer and Communication Systems (VECOS 2020), 2020, 12519: 222-236.
  • [5] Wang, Conghui; Cao, Zhiguang; Wu, Yaoxin; Teng, Long; Wu, Guohua. Deep Reinforcement Learning for Solving Vehicle Routing Problems With Backhauls. IEEE Transactions on Neural Networks and Learning Systems, 2024: 1-15.
  • [6] Pan, Weixu; Liu, Shi Qiang. Deep reinforcement learning for the dynamic and uncertain vehicle routing problem. Applied Intelligence, 2023, 53(1): 405-422.
  • [7] Kalakanti, Arun Kumar; Verma, Shivani; Paul, Topon; Yoshida, Takufumi. RL SolVeR Pro: Reinforcement Learning for Solving Vehicle Routing Problem. 2019 1st International Conference on Artificial Intelligence and Data Sciences (AIDAS2019), 2019: 94-99.
  • [8] Zong, Zefang; Tong, Xia; Zheng, Meng; Li, Yong. Reinforcement Learning for Solving Multiple Vehicle Routing Problem with Time Window. ACM Transactions on Intelligent Systems and Technology, 2024, 15(2).
  • [9] Iklassov, Zangir; Sobirov, Ikboljon; Solozabal, Ruben; Takac, Martin. Reinforcement Learning Approach to Stochastic Vehicle Routing Problem With Correlated Demands. IEEE Access, 2023, 11: 87958-87969.