Deep reinforcement learning based low energy consumption scheduling approach design for urban electric logistics vehicle networks

Citations: 0
Authors
Sun, Pengfei [1 ,2 ,3 ]
He, Jingbo [1 ]
Wan, Jianxiong [1 ,2 ]
Guan, Yuxin [1 ,2 ]
Liu, Dongjiang [1 ,2 ]
Su, Xiaoming [1 ,2 ]
Li, Leixiao [1 ,2 ]
Affiliations
[1] Inner Mongolia Univ Technol, Coll Data Sci & Applicat, Hohhot 010080, Peoples R China
[2] Inner Mongolia Univ Technol, Inner Mongolia Key Lab Beijiang Cyberspace Secur, Hohhot 010080, Peoples R China
[3] Inner Mongolia Univ, Coll Comp Sci, Hohhot 010021, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Urban electric logistics vehicle networks; Low energy consumption scheduling; Heterogeneous attention model; Deep reinforcement learning; OPTIMIZATION; SEARCH;
DOI
10.1038/s41598-025-92916-7
Chinese Library Classification (CLC): O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes: 07; 0710; 09
Abstract
The rapid increase in carbon emissions from the logistics transportation industry has underscored the urgent need for low-carbon logistics solutions. Electric logistics vehicles (ELVs) are increasingly being considered as replacements for traditional fuel-powered vehicles to reduce emissions in urban logistics. However, ELVs are typically limited by their battery capacity and load constraints, and effective charging scheduling and management of transportation duration are additional critical factors that must be addressed. This paper addresses the low energy consumption scheduling (LECS) problem, which aims to minimize the total energy consumption of heterogeneous ELVs with varying load and battery capacities, considering the availability of multiple charging stations (CSs). Given the complexity of the LECS problem, this study proposes a heterogeneous attention model based on an encoder-decoder architecture (HAMEDA), which employs a heterogeneous graph attention network and introduces a novel decoding procedure to enhance solution quality and learning efficiency during the encoding and decoding phases. Trained via deep reinforcement learning (DRL) in an unsupervised manner, HAMEDA autonomously derives optimal transportation routes for each ELV from a given problem instance. Comprehensive simulations verify that HAMEDA reduces overall energy consumption by at least 1.64% compared with traditional heuristic and learning-based algorithms. HAMEDA also maintains a favorable balance between execution speed and solution quality, making it well suited to large-scale tasks that require rapid decision-making.
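The abstract does not spell out HAMEDA's decoding procedure. Purely as an illustration of the constraint structure the decoder must respect (load capacity, battery range, depot returns for recharging), the following sketch implements a greedy masked-decoding baseline, not the authors' attention model: infeasible customers are masked out at each step, and the negative travel distance stands in for a learned attention logit. All function names, the scoring rule, and the single-depot/charging-station assumption are this sketch's own assumptions.

```python
def greedy_route(dist, demand, load_cap, battery_cap, rate=1.0):
    """Greedy masked decoding for one ELV.

    dist: symmetric distance matrix; node 0 is the depot/charging station.
    demand: per-node demand (demand[0] == 0).
    rate: energy consumed per unit distance (simplifying assumption).
    Assumes every customer is reachable from the depot on a full charge.
    """
    n = len(demand)
    visited = [False] * n
    visited[0] = True
    route, cur = [0], 0
    load, battery = load_cap, battery_cap
    while not all(visited):
        best, best_score = None, float("-inf")
        for j in range(1, n):
            # Mask: already served, over load capacity, or not enough
            # charge to reach j and still get back to the depot.
            if visited[j] or demand[j] > load:
                continue
            if (dist[cur][j] + dist[j][0]) * rate > battery:
                continue
            score = -dist[cur][j]  # stand-in for a learned attention logit
            if score > best_score:
                best, best_score = j, score
        if best is None:
            # No feasible customer: return to depot to recharge and reload.
            route.append(0)
            cur, load, battery = 0, load_cap, battery_cap
        else:
            battery -= dist[cur][best] * rate
            load -= demand[best]
            visited[best] = True
            route.append(best)
            cur = best
    route.append(0)  # finish at the depot
    return route
```

In HAMEDA these greedy scores would instead come from the trained heterogeneous graph attention network, and the route would be sampled or decoded from its output distribution; the masking logic, however, is the standard way such constrained routing decoders enforce feasibility.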
Pages: 18
Related Papers
50 records
  • [21] A deep reinforcement learning-based charging scheduling approach with augmented Lagrangian for electric vehicles
    Yang, Lun
    Chen, Guibin
    Cao, Xiaoyu
    APPLIED ENERGY, 2025, 378
  • [22] A modal-based approach for estimating electric vehicle energy consumption in transportation networks
    Xu, Xiaodan
    Aziz, H. M. Abdul
    Guensler, Randall
    TRANSPORTATION RESEARCH PART D-TRANSPORT AND ENVIRONMENT, 2019, 75 : 249 - 264
  • [23] Energy management optimization of fuel cell hybrid electric vehicle based on deep reinforcement learning
    Wang, Hao-Cong
    Wang, Yue-Yang
    Fu, Zhu-Mu
    Chen, Qi-Hong
    Tao, Fa-Zhan
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2024, 41 (10): : 1831 - 1841
  • [24] A comparative study of deep reinforcement learning based energy management strategy for hybrid electric vehicle
    Wang, Zexing
    He, Hongwen
    Peng, Jiankun
    Chen, Weiqi
    Wu, Changcheng
    Fan, Yi
    Zhou, Jiaxuan
    ENERGY CONVERSION AND MANAGEMENT, 2023, 293
  • [25] Reinforcement Learning Based Energy Management in Hybrid Electric Vehicle
    Gole, Tejal
    Hange, Ananda
    Dhar, Rakshita
    Bhurke, Anish
    Kazi, Faruk
    2019 INTERNATIONAL CONFERENCE ON POWER ELECTRONICS, CONTROL AND AUTOMATION (ICPECA-2019), 2019, : 419 - 423
  • [26] Deep dyna reinforcement learning based energy management system for solar operated hybrid electric vehicle using load scheduling technique
    Ghode, Shilpa Dnyaneshwar
    Digalwar, Mayuri
    JOURNAL OF ENERGY STORAGE, 2024, 102
  • [27] Electric vehicle scheduling based on stochastic trip time and energy consumption
    Shen, Yindong
    Li, Yuanyuan
    Chen, Chen
    Li, Jingpeng
    COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 177
  • [28] ADSA: A Multi-path Transmission Scheduling Algorithm based on Deep Reinforcement Learning in Vehicle Networks
    Yin, Chenyang
    Dong, Ping
    Du, Xiaojiang
    Zhang, Yuyang
    Zhang, Hongke
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 5058 - 5063
  • [29] Deep Reinforcement Learning based Energy Scheduling for Edge Computing
    Yang, Qinglin
    Li, Peng
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 175 - 180
  • [30] Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle
    Han, Xuefeng
    He, Hongwen
    Wu, Jingda
    Peng, Jiankun
    Li, Yuecheng
    APPLIED ENERGY, 2019, 254