Deep reinforcement learning based low energy consumption scheduling approach design for urban electric logistics vehicle networks

Cited by: 0
Authors
Sun, Pengfei [1 ,2 ,3 ]
He, Jingbo [1 ]
Wan, Jianxiong [1 ,2 ]
Guan, Yuxin [1 ,2 ]
Liu, Dongjiang [1 ,2 ]
Su, Xiaoming [1 ,2 ]
Li, Leixiao [1 ,2 ]
Affiliations
[1] Inner Mongolia Univ Technol, Coll Data Sci & Applicat, Hohhot 010080, Peoples R China
[2] Inner Mongolia Univ Technol, Inner Mongolia Key Lab Beijiang Cyberspace Secur, Hohhot 010080, Peoples R China
[3] Inner Mongolia Univ, Coll Comp Sci, Hohhot 010021, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, Issue 1
Funding
National Natural Science Foundation of China;
Keywords
Urban electric logistics vehicle networks; Low energy consumption scheduling; Heterogeneous attention model; Deep reinforcement learning; OPTIMIZATION; SEARCH;
DOI
10.1038/s41598-025-92916-7
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
The rapid increase in carbon emissions from the logistics transportation industry has underscored the urgent need for low-carbon logistics solutions. Electric logistics vehicles (ELVs) are increasingly being considered as replacements for traditional fuel-powered vehicles to reduce emissions in urban logistics. However, ELVs are typically limited by their battery capacity and load constraints, and effective charging scheduling and management of transportation duration are critical factors that must be addressed. This paper addresses the low energy consumption scheduling (LECS) problem, which aims to minimize the total energy consumption of heterogeneous ELVs with varying load and battery capacities, considering the availability of multiple charging stations (CSs). Given the complexity of the LECS problem, this study proposes a heterogeneous attention model based on an encoder-decoder architecture (HAMEDA), which employs a heterogeneous graph attention network and introduces a novel decoding procedure to improve solution quality and learning efficiency during the encoding and decoding phases. Trained via deep reinforcement learning (DRL) in an unsupervised manner, HAMEDA autonomously derives optimal transportation routes for each ELV from the specific instances presented. Comprehensive simulations verify that HAMEDA reduces overall energy consumption by at least 1.64% compared with other traditional heuristic and learning-based algorithms. Additionally, HAMEDA maintains a favorable balance between execution speed and solution quality, making it well suited to large-scale tasks that require rapid decision-making.
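The abstract outlines an attention-based encoder-decoder policy trained with deep reinforcement learning to construct vehicle routes. The paper's HAMEDA model itself is not reproduced here; the sketch below (PyTorch) only illustrates the general policy-gradient route-construction pattern such models build on: a transformer encoder over node features, a masked attention decoder that selects one node at a time, and a REINFORCE loss with a moving-average baseline. All module names, dimensions, the random instances, and the single-vehicle tour-length objective (standing in for the energy objective) are illustrative assumptions rather than details taken from the paper.

# Illustrative sketch (not the authors' code): attention encoder-decoder route
# construction trained with REINFORCE on a toy tour-length objective.
import torch
import torch.nn as nn

class AttentionRouter(nn.Module):
    """Minimal attention encoder-decoder that constructs a tour node by node."""
    def __init__(self, node_dim=3, embed_dim=128, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(node_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.query = nn.Linear(embed_dim, embed_dim)

    def forward(self, nodes):
        # nodes: (batch, n, node_dim), e.g. x, y coordinates plus demand
        h = self.encoder(self.embed(nodes))                       # node embeddings
        batch, n, d = h.shape
        visited = torch.zeros(batch, n, dtype=torch.bool, device=nodes.device)
        context = h.mean(dim=1)                                   # graph embedding as first query
        tour, log_probs = [], []
        for _ in range(n):
            q = self.query(context)
            scores = torch.einsum('bd,bnd->bn', q, h) / d ** 0.5
            scores = scores.masked_fill(visited, float('-inf'))   # forbid revisits
            dist = torch.distributions.Categorical(logits=scores)
            choice = dist.sample()                                 # next node per instance
            log_probs.append(dist.log_prob(choice))
            visited = visited.clone()
            visited[torch.arange(batch), choice] = True
            context = h[torch.arange(batch), choice]               # last visited node as next query
            tour.append(choice)
        return torch.stack(tour, dim=1), torch.stack(log_probs, dim=1).sum(dim=1)

def tour_length(nodes, tour):
    # Euclidean length of the closed tour over the first two node features (coordinates).
    coords = nodes[..., :2]
    batch = coords.size(0)
    ordered = coords[torch.arange(batch).unsqueeze(1), tour]       # coordinates in visiting order
    return (ordered - torch.roll(ordered, -1, dims=1)).norm(dim=-1).sum(dim=1)

# REINFORCE training loop with an exponential-moving-average baseline.
model = AttentionRouter()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
baseline = None
for step in range(100):                                            # toy budget; real training runs far longer
    nodes = torch.rand(32, 20, 3)                                  # 32 random instances, 20 nodes each
    tour, log_prob = model(nodes)
    cost = tour_length(nodes, tour)
    baseline = cost.mean() if baseline is None else 0.9 * baseline + 0.1 * cost.mean()
    loss = ((cost - baseline.detach()) * log_prob).mean()          # policy-gradient surrogate loss
    opt.zero_grad()
    loss.backward()
    opt.step()

In the setting described by the abstract, the decoder would additionally mask choices that violate load or battery constraints, allow visits to charging stations, and use route energy consumption rather than Euclidean length as the cost signal.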
Pages: 18