A Deep Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Buses

Cited by: 22
Authors
Zheng, Chunhua [1 ]
Li, Wei [1 ,2 ]
Li, Weimin [1 ]
Xu, Kun [1 ]
Peng, Lei [1 ]
Cha, Suk Won [3 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, 1068 Xueyuan Ave, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, 19 A Yuquan Rd, Beijing 100049, Peoples R China
[3] Seoul Natl Univ, Sch Mech & Aerosp Engn, San 56-1, Seoul 151742, South Korea
Funding
National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; Energy management strategy; Fuel cell hybrid bus; Fuel cell degradation; Reinforcement learning; Pontryagin's Minimum Principle; Power Management; Vehicles;
DOI
10.1007/s40684-021-00403-x
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
An energy management strategy (EMS) plays an important role in hybrid vehicles, as it directly governs the power distribution between the power sources and thus the energy saving of the vehicle. Currently, rule-based and optimization-based EMSs face the challenge of achieving control optimality and real-time performance at the same time. With the rapid development of artificial intelligence, learning-based EMSs, which are able to overcome this challenge, have recently gained increasing attention. In this research, a deep reinforcement learning (DRL)-based EMS is proposed for fuel cell hybrid buses (FCHBs), in which fuel cell durability is considered and evaluated using a fuel cell degradation model. The action space of the DRL algorithm is limited according to the efficiency characteristic of the fuel cell in order to improve fuel economy, and Prioritized Experience Replay (PER) is adopted to improve the convergence performance of the DRL algorithm. Simulation results of the proposed DRL-based EMS for an FCHB are compared to those of a dynamic programming (DP)-based EMS and a reinforcement learning (RL)-based EMS. The comparison shows that the fuel economy of the proposed DRL-based EMS is improved by an average of 3.63% over the RL-based EMS, while the difference from the DP-based EMS is within an average of 5.69%. In addition, the fuel cell degradation rate is decreased by an average of 63.49% using the proposed DRL-based EMS compared to the one that does not consider fuel cell durability. Furthermore, the convergence rate of the proposed DRL-based EMS is improved by an average of 30.54% compared to the one without the PER. Finally, the adaptability of the proposed DRL-based EMS is validated on a new driving cycle, whereas the training of the DRL algorithm is completed on three other driving cycles.
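The abstract names two algorithmic ingredients — Prioritized Experience Replay and an efficiency-based restriction of the fuel-cell action space — without implementation details. The following is a minimal, hypothetical Python sketch of both ideas under stated assumptions; all class names, power values, and hyperparameters (`alpha`, `beta`, the 10–40 kW efficient band) are illustrative and not taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay.

    Transitions with larger TD error are sampled more often, which is the
    mechanism the paper credits for faster DRL convergence."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities bias sampling
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, td_error=1.0):
        # New transitions get at least the current max priority so they
        # are replayed at least once before being down-weighted.
        p = max(abs(td_error), self.priorities.max(), 1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        p = self.priorities[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by PER.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update(self, idx, td_errors):
        self.priorities[idx] = np.abs(td_errors) + 1e-6


def mask_fuel_cell_actions(q_values, p_min, p_max, power_grid):
    """Restrict discrete fuel-cell power levels to an efficient band
    [p_min, p_max], mirroring the paper's idea of limiting the action
    space to the fuel cell's efficient operating region."""
    q = np.array(q_values, dtype=float)
    invalid = (power_grid < p_min) | (power_grid > p_max)
    q[invalid] = -np.inf          # masked actions are never selected
    return int(np.argmax(q))
```

In a DQN-style agent, `mask_fuel_cell_actions` would replace the plain arg-max over Q-values at action-selection time, and the `update` call would feed absolute TD errors back after each training batch.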
Pages: 885-897
Page count: 13
Related Papers
50 records in total
  • [1] A Deep Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Buses
    Chunhua Zheng
    Wei Li
    Weimin Li
    Kun Xu
    Lei Peng
    Suk Won Cha
    [J]. International Journal of Precision Engineering and Manufacturing-Green Technology, 2022, 9 : 885 - 897
  • [2] Deep stochastic reinforcement learning-based energy management strategy for fuel cell hybrid electric vehicles
    Jouda, Basel
    Al-Mahasneh, Ahmad Jobran
    Abu Mallouh, Mohammed
    [J]. ENERGY CONVERSION AND MANAGEMENT, 2024, 301
  • [3] Deep reinforcement learning-based energy management strategy for fuel cell buses integrating future road information and cabin comfort control
    Jia, Chunchun
    Liu, Wei
    He, Hongwen
    Chau, K. T.
    [J]. ENERGY CONVERSION AND MANAGEMENT, 2024, 321
  • [4] Deep reinforcement learning-based energy management strategy for hybrid electric vehicles
    Zhang, Shiyi
    Chen, Jiaxin
    Tang, Bangbei
    Tang, Xiaolin
    [J]. International Journal of Vehicle Performance, 2022, 8 (01): 31 - 45
  • [6] Deep reinforcement learning-based energy management strategies for energy-efficient driving of hybrid electric buses
    Wang, Kunyu
    Yang, Rong
    Huang, Wei
    Mo, Jinchuan
    Zhang, Song
    [J]. PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART D-JOURNAL OF AUTOMOBILE ENGINEERING, 2023, 237 (08) : 1790 - 1804
  • [7] A Speedy Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Vehicles Considering Fuel Cell System Lifetime
    Wei Li
    Jiaye Ye
    Yunduan Cui
    Namwook Kim
    Suk Won Cha
    Chunhua Zheng
    [J]. International Journal of Precision Engineering and Manufacturing-Green Technology, 2022, 9 : 859 - 872
  • [10] Deep reinforcement learning based energy management strategy of fuel cell hybrid railway vehicles considering fuel cell aging
    Deng, Kai
    Liu, Yingxu
    Hai, Di
    Peng, Hujun
    Löwenstein, Lars
    Pischinger, Stefan
    Hameyer, Kay
    [J]. Energy Conversion and Management, 2022, 251