Smart Battery Swapping Control for an Electric Motorcycle Fleet With Peak Time Based on Deep Reinforcement Learning

Cited by: 0
Authors
Park, YoonShik [1 ]
Zu, Seungdon [2 ]
Xie, Chi [3 ]
Lee, Hyunwoo [4 ]
Cheong, Taesu [5 ]
Lu, Qing-Chang [6 ]
Xu, Meng [7 ]
Affiliations
[1] Korea Univ, Sch Ind & Management Engn, Seoul 02841, South Korea
[2] Zentropy, Seoul 06067, South Korea
[3] Tongji Univ, Dept Transportat Informat & Control Engn, Shanghai 201804, Peoples R China
[4] Virginia Tech, Grado Dept Ind & Syst Engn, Blacksburg, VA 24060 USA
[5] Korea Univ, Sch Ind & Management Engn, Seoul 02841, South Korea
[6] Changan Univ, Sch Elect & Control Engn, Dept Traff Informat & Control Engn, Xian 710064, Peoples R China
[7] Beijing Jiaotong Univ, Sch Syst Sci, Beijing 100044, Peoples R China
Funding
National Research Foundation, Singapore; National Natural Science Foundation of China;
Keywords
Electric motorcycle control; charging station recommendation; deep reinforcement learning; parameter sharing; decentralized multi-agent reinforcement learning; TAXI;
DOI
10.1109/TITS.2024.3469110
CLC number
TU [Building Science];
Discipline code
0813;
Abstract
This study proposes a deep Q-network (DQN) model for electric motorcycles (EMs) and a multi-agent reinforcement learning (MARL)-based central control system to support battery swapping decision-making in the delivery business. We aim to minimize expected delivery losses, especially in scenarios where delivery requests are generated randomly and independently for each EM, with fluctuating time distributions and limited battery swapping station (BSS) capacity. Our MARL benefits from a reservation mechanism and a profit-aggregated central system, which greatly reduces the complexity of MARL. Furthermore, to address the inherent non-stationarity of MARL, we propose a decentralized agent-based MARL framework named Decentralized Agents, Centralized Learning Deep Q Network. This framework, leveraging a tailored learning algorithm, achieves peak-averse behavior and thereby reduces delivery losses. Additionally, we introduce a hybrid approach that combines the resulting DQN algorithm, which determines when to visit a BSS, with a greedy algorithm that decides which BSS to visit. Computational experiments using real-world delivery data are conducted to evaluate the performance of our algorithm. The results demonstrate that the hybrid approach maximizes the overall profit of the entire EM fleet in a challenging environment with limited BSS capacity.
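The abstract describes a hybrid decision scheme: a learned Q-function decides when an EM should leave its deliveries to swap its battery, and a greedy rule, supported by a reservation mechanism, decides which BSS to visit. The sketch below is a minimal illustration of that two-stage decision loop, not the authors' implementation; all names (EMState, q_values, choose_bss, hybrid_step) and the heuristic stand-in for the trained DQN are hypothetical assumptions introduced here for clarity.

```python
# Illustrative sketch of the hybrid "when to swap" (DQN) / "where to swap" (greedy)
# decision rule described in the abstract. Not the authors' code.

from dataclasses import dataclass
from typing import List
import random

@dataclass
class BSS:
    station_id: int
    distance_km: float        # travel distance from the EM's current position
    charged_batteries: int    # fully charged batteries currently available
    reservations: int         # batteries already reserved by other EMs

@dataclass
class EMState:
    battery_level: float      # state of charge in [0, 1]
    pending_deliveries: int   # delivery requests currently assigned
    hour_of_day: int          # proxy for the fluctuating request-time distribution

def q_values(state: EMState) -> List[float]:
    """Stand-in for the trained DQN: returns Q(s, a) for
    a in {0: keep delivering, 1: go swap now}. A real implementation would be
    a neural network trained under the paper's centralized-learning scheme."""
    urgency = max(0.0, 0.3 - state.battery_level) * 10.0  # hypothetical heuristic
    return [1.0 - urgency, urgency]

def choose_bss(stations: List[BSS]) -> BSS:
    """Greedy 'which BSS' rule: among stations with an unreserved charged
    battery, pick the closest one."""
    feasible = [s for s in stations if s.charged_batteries - s.reservations > 0]
    return min(feasible or stations, key=lambda s: s.distance_km)

def hybrid_step(state: EMState, stations: List[BSS], epsilon: float = 0.05) -> str:
    """One decision step: epsilon-greedy over the DQN's 'when' decision,
    then the greedy 'where' decision if a swap is chosen."""
    q = q_values(state)
    action = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    if action == 0:
        return "continue deliveries"
    target = choose_bss(stations)
    target.reservations += 1  # reservation mechanism: claim a battery in advance
    return f"swap at BSS {target.station_id}"

if __name__ == "__main__":
    state = EMState(battery_level=0.18, pending_deliveries=2, hour_of_day=12)
    stations = [BSS(1, 2.5, 1, 1), BSS(2, 4.0, 3, 0)]
    print(hybrid_step(state, stations))
```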
Pages: 20175 - 20189
Number of pages: 15