Deep reinforcement learning based low energy consumption scheduling approach design for urban electric logistics vehicle networks

Citations: 0
Authors
Sun, Pengfei [1 ,2 ,3 ]
He, Jingbo [1 ]
Wan, Jianxiong [1 ,2 ]
Guan, Yuxin [1 ,2 ]
Liu, Dongjiang [1 ,2 ]
Su, Xiaoming [1 ,2 ]
Li, Leixiao [1 ,2 ]
Affiliations
[1] Inner Mongolia Univ Technol, Coll Data Sci & Applicat, Hohhot 010080, Peoples R China
[2] Inner Mongolia Univ Technol, Inner Mongolia Key Lab Beijiang Cyberspace Secur, Hohhot 010080, Peoples R China
[3] Inner Mongolia Univ, Coll Comp Sci, Hohhot 010021, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Urban electric logistics vehicle networks; Low energy consumption scheduling; Heterogeneous attention model; Deep reinforcement learning; OPTIMIZATION; SEARCH;
DOI
10.1038/s41598-025-92916-7
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
The rapid increase in carbon emissions from the logistics transportation industry has underscored the urgent need for low-carbon logistics solutions. Electric logistics vehicles (ELVs) are increasingly being considered as replacements for traditional fuel-powered vehicles to reduce emissions in urban logistics. However, ELVs are typically limited by their battery capacity and load constraints. Additionally, effective charging scheduling and the management of transportation duration are critical factors that must be addressed. This paper addresses the low energy consumption scheduling (LECS) problem, which aims to minimize the total energy consumption of heterogeneous ELVs with varying load and battery capacities, considering the availability of multiple charging stations (CSs). Given the complexity of the LECS problem, this study proposes a heterogeneous attention model based on an encoder-decoder architecture (HAMEDA), which employs a heterogeneous graph attention network and introduces a novel decoding procedure to enhance solution quality and learning efficiency during the encoding and decoding phases. Trained via deep reinforcement learning (DRL) in an unsupervised manner, HAMEDA autonomously derives near-optimal transportation routes for each ELV from a given problem instance. Comprehensive simulations verify that HAMEDA reduces overall energy consumption by at least 1.64% compared with other traditional heuristic or learning-based algorithms. HAMEDA also strikes a favorable balance between execution speed and solution quality, making it well suited to large-scale tasks that require rapid decision-making.
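The abstract describes constructing routes step by step with a masked attention decoder trained by DRL. The paper's actual architecture is not reproduced here; as a rough illustration only, the following minimal sketch (hypothetical code, not the authors') shows the kind of masked pointer-attention decoding step that encoder-decoder routing models typically use, where infeasible nodes (already visited, or violating load/battery limits) are masked out before sampling the next stop:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax; -inf entries get probability 0."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_decode_step(query, node_embeddings, mask, clip=10.0):
    """One pointer-attention decoding step: score every node against the
    current decoder query, mask infeasible nodes, and return a probability
    distribution over the next node to visit."""
    d = query.shape[-1]
    scores = node_embeddings @ query / np.sqrt(d)  # scaled dot-product logits
    scores = clip * np.tanh(scores)                # logit clipping, as in pointer networks
    scores = np.where(mask, scores, -np.inf)       # forbid infeasible nodes
    return softmax(scores)

def greedy_route(node_embeddings, start=0):
    """Greedily decode a route visiting every node once (illustrative only;
    a trained model would use learned embeddings and a richer query)."""
    n = node_embeddings.shape[0]
    mask = np.ones(n, dtype=bool)
    route = [start]
    mask[start] = False
    query = node_embeddings[start]
    while mask.any():
        probs = attention_decode_step(query, node_embeddings, mask)
        nxt = int(np.argmax(probs))                # greedy choice; DRL training samples instead
        route.append(nxt)
        mask[nxt] = False
        query = node_embeddings[nxt]
    return route
```

In DRL training of such models, the decoder samples from these probabilities, the negative route energy serves as the reward, and a policy-gradient method (e.g. REINFORCE with a baseline) updates the encoder and decoder jointly.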
Pages: 18
Related Papers
50 records
  • [41] A comparative study of 13 deep reinforcement learning based energy management methods for a hybrid electric vehicle
    Wang, Hanchen
    Ye, Yiming
    Zhang, Jiangfeng
    Xu, Bin
    ENERGY, 2023, 266
  • [42] Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks
    Qi, Fan
    Li, Zhuo
    Chen, Xin
    2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020, : 835 - 840
  • [43] Deep Reinforcement Learning-based Scheduling for Roadside Communication Networks
    Atallah, Ribal
    Assi, Chadi
    Khabbaz, Maurice
    2017 15TH INTERNATIONAL SYMPOSIUM ON MODELING AND OPTIMIZATION IN MOBILE, AD HOC, AND WIRELESS NETWORKS (WIOPT), 2017,
  • [44] Energy and Balancing Services Provision by Electric Vehicles with Vehicle-to-Grid Capability: A Deep Reinforcement Learning Approach
    Blatiak, Alicia
    Qiu, Dawei
    Strbac, Goran
    Papadaskalopoulos, Dimitrios
    2024 INTERNATIONAL CONFERENCE ON SMART ENERGY SYSTEMS AND TECHNOLOGIES, SEST 2024, 2024,
  • [45] Intelligent Driving Task Scheduling Service in Vehicle-Edge Collaborative Networks Based on Deep Reinforcement Learning
    Wang, Nuanlai
    Pang, Shanchen
    Ji, Xiaofeng
    Wang, Min
    Qiao, Sibo
    Yu, Shihang
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (04): : 4357 - 4368
  • [46] Optimal scheduling for charging and discharging of electric vehicles based on deep reinforcement learning
    An, Dou
    Cui, Feifei
    Kang, Xun
    FRONTIERS IN ENERGY RESEARCH, 2023, 11
  • [47] A deep reinforcement learning based charging and discharging scheduling strategy for electric vehicles
    Xiao, Qin
    Zhang, Runtao
    Wang, Yongcan
    Shi, Peng
    Wang, Xi
    Chen, Baorui
    Fan, Chengwei
    Chen, Gang
    ENERGY REPORTS, 2024, 12 : 4854 - 4863
  • [48] Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning
    Alqahtani, Mohammed
    Hu, Mengqi
    ENERGY, 2022, 244
  • [49] Energy Storage Scheduling Optimization Strategy Based on Deep Reinforcement Learning
    Hou, Shixi
    Han, Jienan
    Liu, Xiangjiang
    Guo, Ruoshan
    Chu, Yundi
    ADVANCES IN NEURAL NETWORKS-ISNN 2024, 2024, 14827 : 33 - 44
  • [50] Energy-efficient VM scheduling based on deep reinforcement learning
    Wang, Bin
    Liu, Fagui
    Lin, Weiwei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 125 : 616 - 628