Deep reinforcement learning based adaptive energy management for plug-in hybrid electric vehicle with double deep Q-network

Cited: 2
Authors
Shi, Dehua [1 ,2 ]
Xu, Han [1 ]
Wang, Shaohua [1 ,2 ]
Hu, Jia [3 ]
Chen, Long [1 ]
Yin, Chunfang [4 ]
Affiliations
[1] Jiangsu Univ, Automot Engn Res Inst, Zhenjiang 212013, Peoples R China
[2] Jiangsu Prov Engn Res Ctr Elect Dr Syst & Intellig, Zhenjiang 212013, Peoples R China
[3] Tongji Univ, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[4] Jiangsu Univ, Sch Elect & Informat Engn, Zhenjiang 212013, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Plug-in hybrid electric vehicle; Energy management strategy; Adaptive equivalent consumption minimization strategy; Double deep Q-network; Driving cycle information; TRAFFIC INFORMATION; SERIES-PARALLEL; STRATEGY;
DOI
10.1016/j.energy.2024.132402
CLC Classification
O414.1 [Thermodynamics];
Abstract
The equivalent consumption minimization strategy (ECMS) with a pre-calibrated constant equivalence factor (EF) can achieve a near-globally optimal solution for a specific driving cycle and offers good real-time capability, but it adapts poorly to a wide range of driving conditions. To this end, aiming at the optimal energy management problem of a plug-in hybrid electric vehicle (PHEV), this paper proposes a deep reinforcement learning (DRL) based adaptive ECMS by combining the double deep Q-network (DDQN) and driving cycle information. The DDQN is applied to correct the EF of the ECMS in a feed-forward manner, with the battery state-of-charge (SOC) and the periodically predicted driving cycle information as inputs, and the ECMS is utilized to calculate the engine torque and gear ratio of the powertrain. The driving cycle information is represented by the average velocity, which is predicted from the historical velocity sequence by a back-propagation (BP) neural network, and by the difference in average velocity between two consecutive time windows. A hardware-in-the-loop (HIL) platform is constructed to test the performance of the proposed strategy. It is shown that the future average velocity can be well predicted from the historical velocity sequence. Both simulation and HIL test results demonstrate that the proposed adaptive ECMS based on DDQN exhibits superior performance in improving vehicle fuel economy.
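The EF-correction idea in the abstract can be illustrated with a toy sketch: a DDQN agent (not implemented here) would pick a discrete correction to the equivalence factor, and the ECMS then searches for the power split minimizing the instantaneous equivalent fuel consumption. All numbers, function shapes, and names below are illustrative assumptions, not the authors' calibrated vehicle model.

```python
# Toy ECMS with a discrete EF correction, sketching the paper's scheme.
# The engine fuel model, efficiency, and action set are all assumed.

EF_ACTIONS = [-0.1, 0.0, 0.1]   # assumed discrete EF corrections (DDQN output)

def equivalent_consumption(fuel_rate, battery_power, ef, lhv=42.5e6):
    """Instantaneous equivalent fuel rate [kg/s]:
    m_eqv = m_fuel + s * P_batt / Q_lhv (lhv: fuel lower heating value, J/kg)."""
    return fuel_rate + ef * battery_power / lhv

def ecms_split(demand_power, ef, engine_eff=0.35, lhv=42.5e6, steps=21):
    """Grid-search the engine/battery power split minimizing equivalent
    consumption -- a stand-in for the paper's torque/gear-ratio search."""
    best = None
    for i in range(steps):
        p_eng = demand_power * i / (steps - 1)
        p_bat = demand_power - p_eng
        fuel_rate = p_eng / (engine_eff * lhv)   # simple constant-efficiency fuel model
        cost = equivalent_consumption(fuel_rate, p_bat, ef, lhv)
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_bat)
    return best  # (equivalent fuel rate, engine power, battery power)

def corrected_ef(base_ef, action_index):
    """Feed-forward EF correction chosen by the (not shown) DDQN agent."""
    return base_ef + EF_ACTIONS[action_index]
```

In this toy model the split tips at ef = 1/engine_eff: below it, battery energy is "cheap" and the search discharges the battery; above it, the engine supplies all demanded power, which is the qualitative behavior the EF correction is meant to steer.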
Pages: 17
Related Papers
50 records in total
  • [21] Reinforcement Learning-based Real-time Energy Management for Plug-in Hybrid Electric Vehicle with Hybrid Energy Storage System
    Cao, Jiayi
    Xiong, Rui
    [J]. PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON APPLIED ENERGY, 2017, 142 : 1896 - 1901
  • [22] Adaptive intelligent energy management system of plug-in hybrid electric vehicle
    Khayyam, Hamid
    Bab-Hadiashar, Alireza
    [J]. ENERGY, 2014, 69 : 319 - 335
  • [23] Adaptive Hierarchical Energy Management Design for a Plug-In Hybrid Electric Vehicle
    Liu, Teng
    Tang, Xiaolin
    Wang, Hong
    Yu, Huilong
    Hu, Xiaosong
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (12) : 11513 - 11522
  • [24] Deep Q-learning network based trip pattern adaptive battery longevity-conscious strategy of plug-in fuel cell hybrid electric vehicle
    Lin, Xinyou
    Xu, Xinhao
    Wang, Zhaorui
    [J]. APPLIED ENERGY, 2022, 321
  • [26] Energy Management for Plug-In Hybrid Electric Vehicle Based on Adaptive Simplified-ECMS
    Zeng, Yuping
    Cai, Yang
    Kou, Guiyue
    Gao, Wei
    Qin, Datong
    [J]. SUSTAINABILITY, 2018, 10 (06)
  • [28] Hybrid Electric Vehicle Energy Management With Computer Vision and Deep Reinforcement Learning
    Wang, Yong
    Tan, Huachun
    Wu, Yuankai
    Peng, Jiankun
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (06) : 3857 - 3868
  • [29] Energy Optimization of Hybrid electric Vehicles Using Deep Q-Network
    Yokoyama, Takashi
    Ohmori, Hiromitsu
    [J]. 2022 61ST ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS (SICE), 2022, : 827 - 832
  • [30] Deep Reinforcement Learning. Case Study: Deep Q-Network
    Vrejoiu, Mihnea Horia
    [J]. ROMANIAN JOURNAL OF INFORMATION TECHNOLOGY AND AUTOMATIC CONTROL-REVISTA ROMANA DE INFORMATICA SI AUTOMATICA, 2019, 29 (03): : 65 - 78