Leveraging Transfer Learning in Deep Reinforcement Learning for Solving Combinatorial Optimization Problems Under Uncertainty

Cited by: 0
Authors
Ezzahra Achamrah, Fatima [1 ]
Affiliations
[1] Univ Sheffield, Sheffield Univ Management Sch, Sheffield S10 1FL, England
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Optimization; Deep reinforcement learning; Uncertainty; Transfer learning; Heuristic algorithms; Adaptation models; Stochastic processes; Computational modeling; Vehicle dynamics; Routing; Combinatorial optimization problems; uncertainty; deep reinforcement learning; transfer learning; vehicle routing problem; VEHICLE-ROUTING PROBLEM; NEURAL-NETWORKS; TIME WINDOWS; ALGORITHM; DELIVERY; PICKUP;
DOI
10.1109/ACCESS.2024.3505678
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In recent years, addressing the inherent uncertainties within Combinatorial Optimization Problems (COPs) has revealed the limitations of traditional optimization methods. Although these methods are often effective in deterministic settings, they can lack the flexibility and adaptability needed to navigate the uncertainty of real-world COPs. Deep Reinforcement Learning (DRL) has emerged as a promising approach for dynamic decision-making in these complex environments. Yet applying DRL to COPs exposes a key limitation: models generalize poorly across problem instances and require extensive retraining and customization for each new variant, leading to notable computational costs and inefficiencies. To address these challenges, this paper introduces a novel framework that combines the adaptability and learning capabilities of DRL with the efficiency of Transfer Learning (TL) and Neural Architecture Search. The framework leverages knowledge gained from solving one COP to improve the solving of different but related COPs, eliminating the need to retrain models from scratch for each new problem variant. The framework was evaluated on over 1,500 benchmark instances across 10 stochastic and deterministic variants of the vehicle routing problem. Across extensive experiments, the approach consistently improves solution quality and computational efficiency. On average, it achieves at least a 5% improvement in solution quality and a 20% reduction in CPU time compared to state-of-the-art methods, with some variants showing even more substantial gains. For large-scale instances with more than 200 customers, the TL process requires only 10-15% of the time needed to train models from scratch while maintaining solution quality, laying the groundwork for future research in this area.
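The transfer-learning step summarized in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): a policy network assumed to be pretrained on one VRP variant donates its encoder to a policy for a related variant, and only the decoder is fine-tuned on the new variant. All class, variable, and parameter names below are illustrative assumptions.

```python
# Hypothetical sketch of transfer learning between VRP variants in a DRL setting.
import torch
import torch.nn as nn


class VRPPolicy(nn.Module):
    """Toy encoder-decoder policy standing in for an attention-based DRL model."""

    def __init__(self, node_dim: int = 3, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(node_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.decoder = nn.Linear(hidden, 1)  # scores each node as the next visit

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(nodes)).squeeze(-1)


source_policy = VRPPolicy()  # assumed already trained on a source variant (e.g. deterministic CVRP)
target_policy = VRPPolicy()  # to be adapted to a related target variant (e.g. stochastic demands)

# Transfer: reuse the node representations learned on the source variant.
target_policy.encoder.load_state_dict(source_policy.encoder.state_dict())

# Freeze the transferred encoder so fine-tuning updates only the decoder,
# which is what keeps adaptation much cheaper than training from scratch.
for p in target_policy.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target_policy.decoder.parameters(), lr=1e-4)

# One fine-tuning step on a dummy batch of target-variant instances:
# 32 instances, 50 customers, features (x, y, demand).
nodes = torch.rand(32, 50, 3)
scores = target_policy(nodes)
loss = scores.mean()  # placeholder for a REINFORCE-style routing loss
loss.backward()
optimizer.step()
```

Freezing the transferred encoder and updating only the decoder is one plausible way to realize the reported reduction in adaptation time relative to training from scratch; the paper's actual architecture and fine-tuning schedule may differ.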
Pages: 181477 - 181497
Page count: 21
Related Papers
50 records in total
  • [1] Transfer Reinforcement Learning for Combinatorial Optimization Problems
    Souza, Gleice Kelly Barbosa
    Santos, Samara Oliveira Silva
    Ottoni, Andre Luiz Carvalho
    Oliveira, Marcos Santos
    Oliveira, Daniela Carine Ramires
    Nepomuceno, Erivelton Geraldo
    ALGORITHMS, 2024, 17 (02)
  • [2] A general deep reinforcement learning hyperheuristic framework for solving combinatorial optimization problems
    Kallestad, Jakob
    Hasibi, Ramin
    Hemmati, Ahmad
    Soerensen, Kenneth
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2023, 309 (01) : 446 - 468
  • [3] Deep Reinforcement Learning for Combinatorial Optimization: Covering Salesman Problems
    Li, Kaiwen
    Zhang, Tao
    Wang, Rui
    Wang, Yuheng
    Han, Yi
    Wang, Ling
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (12) : 13142 - 13155
  • [4] Solving combinatorial optimization problems over graphs with BERT-Based Deep Reinforcement Learning
    Wang, Qi
    Lai, Kenneth H.
    Tang, Chunlei
    INFORMATION SCIENCES, 2023, 619 : 930 - 946
  • [5] An approach to solving combinatorial optimization problems using a population of reinforcement learning agents
    Miagkikh, VV
    Punch, WF
    GECCO-99: PROCEEDINGS OF THE GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, 1999, : 1358 - 1365
  • [6] Deep Reinforcement Learning for Exact Combinatorial Optimization: Learning to Branch
    Zhang, Tianyu
    Banitalebi-Dehkordi, Amin
    Zhang, Yong
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3105 - 3111
  • [7] Deep reinforcement learning for radiative heat transfer optimization problems
    Ortiz-Mansilla, E.
    Garcia-Esteban, J. J.
    Bravo-Abad, J.
    Cuevas, J. C.
    PHYSICAL REVIEW APPLIED, 2024, 22 (05):
  • [8] Solving Continual Combinatorial Selection via Deep Reinforcement Learning
    Song, Hyungseok
    Jang, Hyeryung
    Tran, Hai H.
    Yoon, Se-eun
    Son, Kyunghwan
    Yun, Donggyu
    Chung, Hyoju
    Yi, Yung
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3467 - 3474
  • [9] Deep reinforcement learning with credit assignment for combinatorial optimization
    Yan, Dong
    Weng, Jiayi
    Huang, Shiyu
    Li, Chongxuan
    Zhou, Yichi
    Su, Hang
    Zhu, Jun
    PATTERN RECOGNITION, 2022, 124
  • [10] A REINFORCEMENT LEARNING BASED FRAMEWORK FOR SOLVING OPTIMIZATION PROBLEMS
    Czibula, Istvan-Gergely
    Czibula, Gabriela
    Bocicor, Maria-Iuliana
    KEPT 2011: KNOWLEDGE ENGINEERING PRINCIPLES AND TECHNIQUES, 2011, : 235 - 246