Approximate Cost-Optimal Energy Management of Hydrogen Electric Multiple Unit Trains Using Double Q-Learning Algorithm

Cited: 74
Authors
Li, Qi [1 ]
Meng, Xiang [1 ]
Gao, Fei [3 ,4 ]
Zhang, Guorui [2 ]
Chen, Weirong [1 ]
Affiliations
[1] Southwest Jiaotong Univ, Dept Elect Engn, Chengdu 611756, Sichuan, Peoples R China
[2] CRRC Qingdao Sifang Co Ltd, Qingdao 266111, Shandong, Peoples R China
[3] Univ Bourgogne Franche Comte, FEMTO ST Inst, Rue Ernest Thierry Mieg, F-90010 Belfort, France
[4] Univ Bourgogne Franche Comte, FCLAB, UTBM, CNRS, Rue Ernest Thierry Mieg, F-90010 Belfort, France
Keywords
Fuel cells; Energy management; Optimization; Hydrogen; Resistance; Batteries; Hybrid power systems; fuel cell; hydrogen; rail transportation; POWER MANAGEMENT; HYBRID; STRATEGY; OPTIMIZATION; OPERATION; BATTERY; CONSUMPTION; VEHICLE; SYSTEM;
DOI
10.1109/TIE.2021.3113021
CLC Classification Number
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
The energy management strategy (EMS) is key to the performance of a fuel cell/battery hybrid system. Reinforcement learning (RL) has recently been introduced into this field and has gradually become a research focus. However, traditional EMSs consider only energy consumption when optimizing operating economy and ignore the cost caused by power source degradation, which results in poor operating economy in terms of total cost of ownership (TCO). Moreover, most of the RL algorithms studied so far suffer from overestimation and restrict the battery state of charge (SOC) in an improper way, which also degrades control performance. To solve these problems, this article first establishes a TCO model that includes energy consumption, equivalent energy consumption, and degradation of the power sources, and then adopts a Double Q-learning RL algorithm with a state constraint and a variable action space to determine the optimal EMS. Finally, using a hardware-in-the-loop platform, the feasibility, superiority, and generalization of the proposed EMS are demonstrated through comparison with optimal dynamic programming, a traditional RL EMS, and the equivalent consumption minimization strategy (ECMS) under both training and unknown operating conditions. The results show that the proposed strategy achieves high global optimality and excellent SOC control under both training and unknown conditions.
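As a rough illustration of the approach the abstract describes, the sketch below shows tabular Double Q-learning in which an SOC constraint is enforced through a variable (state-dependent) admissible action set. All names and numbers here (SOC_BINS, FC_POWERS, SOC_MIN/SOC_MAX, the toy plant model, the hydrogen and degradation cost weights, the demand profile) are placeholder assumptions for illustration only, not the paper's model or values; the greedy operator is an argmin because the TCO objective is a cost to be minimized.

```python
# Minimal sketch of tabular Double Q-learning for a fuel cell / battery hybrid EMS.
# All model parameters below are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

# Discretization (assumed): SOC bins as the state, fuel cell power levels (kW) as actions.
SOC_BINS = np.linspace(0.3, 0.8, 26)
FC_POWERS = np.linspace(0.0, 200.0, 21)
N_S, N_A = len(SOC_BINS), len(FC_POWERS)

Q_A = np.zeros((N_S, N_A))   # two independent Q tables, the core of Double Q-learning
Q_B = np.zeros((N_S, N_A))

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
SOC_MIN, SOC_MAX = 0.35, 0.75   # assumed SOC constraint band


def admissible_actions(soc):
    """Variable action space: shrink the action set near the SOC bounds so the
    battery cannot be pushed outside its constraint band (illustrative rule)."""
    if soc <= SOC_MIN:                 # battery low -> force higher fuel cell output
        return np.arange(N_A // 2, N_A)
    if soc >= SOC_MAX:                 # battery high -> cap fuel cell output
        return np.arange(0, N_A // 2)
    return np.arange(N_A)


def step(soc, p_fc, p_demand):
    """Toy plant: the battery covers whatever demand the fuel cell does not.
    Returns the next SOC and a TCO-style cost (hydrogen + degradation surrogate)."""
    p_batt = p_demand - p_fc
    soc_next = np.clip(soc - 1e-4 * p_batt, 0.0, 1.0)
    h2_cost = 0.02 * p_fc                          # assumed hydrogen cost term
    degr_cost = 1e-4 * p_fc + 5e-5 * abs(p_batt)   # assumed degradation cost term
    return soc_next, h2_cost + degr_cost


def soc_index(soc):
    return int(np.clip(np.digitize(soc, SOC_BINS) - 1, 0, N_S - 1))


for episode in range(200):
    soc = 0.6
    for t in range(500):
        p_demand = 100.0 + 80.0 * np.sin(2 * np.pi * t / 500)   # assumed demand profile
        s = soc_index(soc)
        acts = admissible_actions(soc)
        if rng.random() < EPS:                                   # epsilon-greedy exploration
            a = rng.choice(acts)
        else:
            a = acts[np.argmin((Q_A + Q_B)[s, acts])]            # cost, so greedy = argmin
        soc_next, cost = step(soc, FC_POWERS[a], p_demand)
        s2, acts2 = soc_index(soc_next), admissible_actions(soc_next)
        # Double Q-learning update: one table selects the greedy next action,
        # the other evaluates it, which reduces the estimation bias of plain Q-learning.
        if rng.random() < 0.5:
            a_star = acts2[np.argmin(Q_A[s2, acts2])]
            Q_A[s, a] += ALPHA * (cost + GAMMA * Q_B[s2, a_star] - Q_A[s, a])
        else:
            a_star = acts2[np.argmin(Q_B[s2, acts2])]
            Q_B[s, a] += ALPHA * (cost + GAMMA * Q_A[s2, a_star] - Q_B[s, a])
        soc = soc_next
```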
Pages: 9099-9110
Page count: 12
Related Papers
50 records in total
  • [41] Multiagent-based secure energy management for multimedia grid communication using Q-learning
    Kumari, Aparna
    Tanwar, Sudeep
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (25) : 36645 - 36665
  • [42] Dynamic Energy Management for Perpetual Operation of Energy Harvesting Wireless Sensor Node Using Fuzzy Q-Learning
    Hsu, Roy Chaoming
    Lin, Tzu-Hao
    Su, Po-Cheng
    ENERGIES, 2022, 15 (09)
  • [43] Energy aware optimal routing model for wireless multimedia sensor networks using modified Voronoi assisted prioritized double deep Q-learning
    Suseela, Sellamuthu
    Krithiga, Ravi
    Revathi, Muthusamy
    Sudhakaran, Gajendran
    Bhavadharini, Reddiyapalayam Murugeshan
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (06)
  • [44] Energy-Efficient Power Control for Multiple-Relay Cooperative Networks Using Q-Learning
    Shams, Farshad
    Bacci, Giacomo
    Luise, Marco
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2015, 14 (03) : 1567 - 1580
  • [45] Energy-Aware Power Control for a Multiple-Relay Cooperative Network using Q-Learning
    Shams, Farshad
    Bacci, Giacomo
    Luise, Marco
    2014 9TH INTERNATIONAL CONFERENCE ON COGNITIVE RADIO ORIENTED WIRELESS NETWORKS AND COMMUNICATIONS (CROWNCOM), 2014, : 417 - 422
  • [46] A Novel Energy Management Strategy Based on Dual Reward Function Q-learning for Fuel Cell Hybrid Electric Vehicle
    Zhang, Yuxiang
    Ma, Rui
    Zhao, Dongdong
    Huangfu, Yigeng
    Liu, Weiguo
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (02) : 1537 - 1547
  • [47] Discrete control model Q-learning for an energy storage system with a hydrogen unit of an autonomous hybrid power plant of a railway substation
    Matrenin, Pavel
    Ghulomzoda, Anvari
    Safaraliev, Murodbek
    Tavlintsev, Alexander
    INTERNATIONAL JOURNAL OF HYDROGEN ENERGY, 2024, 93 : 704 - 714
  • [48] Deep Q-Learning Based Energy Management Strategy for a Series Hybrid Electric Tracked Vehicle and Its Adaptability Validation
    He, Dingbo
    Zou, Yuan
    Wu, Jinlong
    Zhang, Xudong
    Zhang, Zhigang
    Wang, Ruizhi
    2019 IEEE TRANSPORTATION ELECTRIFICATION CONFERENCE AND EXPO (ITEC), 2019
  • [49] Real-Time Energy Management for Plug-in Hybrid Electric Vehicles via Incorporating Double-Delay Q-Learning and Model Prediction Control
    Shen, Shiquan
    Gao, Shun
    Liu, Yonggang
    Zhang, Yuanjian
    Shen, Jiangwei
    Chen, Zheng
    Lei, Zhenzhen
    IEEE ACCESS, 2022, 10 : 131076 - 131089
  • [50] A Q-Learning Game-Theory-Based Algorithm to Improve the Energy Efficiency of a Multiple Relay-Aided Network
    Shams, Farshad
    Bacci, Giacomo
    Luise, Marco
    2014 XXXITH URSI GENERAL ASSEMBLY AND SCIENTIFIC SYMPOSIUM (URSI GASS), 2014