Automotive fuel cell performance degradation prediction using Multi-Agent Cooperative Advantage Actor-Critic model

Citations: 0
Authors
Hou, Yanzhu [1 ]
Yin, Cong [1 ,2 ]
Sheng, Xia [3 ]
Xu, Dechao [3 ]
Chen, Junxiong [1 ]
Tang, Hao [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Hydrogen & Fuel Cell Inst, Chengdu 611731, Peoples R China
[3] Gen Inst FAW, Powertrain Dept, Changchun 130011, Peoples R China
Keywords
Proton exchange membrane fuel cell; Degradation prediction; Multi-agent advantage actor-critic method; Prognostic method; Stack
DOI
10.1016/j.energy.2025.134899
Chinese Library Classification
O414.1 [Thermodynamics]
Abstract
The performance degradation of the automotive proton exchange membrane fuel cell (PEMFC) has long been a bottleneck hindering its commercial application. Predicting fuel cell voltage degradation is vital, as it provides practical guidance for fuel cell health management to extend the cell's lifetime. While the Reinforcement Learning (RL) model Advantage Actor-Critic (A2C) has shown promise in predicting degradation, its high learning cost and poor prediction stability limit its application. In this work, a fuel cell mechanism model, which extracts degradation features according to different operating conditions, is integrated into a novel Multi-Agent Cooperative Advantage Actor-Critic (MAC-A2C) framework. The extracted aging parameters under different load currents are assigned to individual agents whose respective actor networks are updated asynchronously to learn the corresponding degradation trends. The model is trained and validated using a dataset from a self-designed 80 kW PEMFC engine operating on a city bus for 3566 h. Results indicate that our method effectively tracks the voltage degradation trend, achieves high prediction accuracy with a mean absolute percentage error (MAPE) of 0.38%, reduces learning cost by 87.5%, and significantly improves prediction stability. The proposed method offers a potential solution for online health management, thereby extending the lifespan of fuel cell vehicles.
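The abstract's core idea, assigning each load-current regime its own actor-critic learner, can be illustrated with a toy sketch. This is not the paper's implementation: the Gaussian policy, the band edges, and the per-band degradation rates below are all hypothetical, chosen only to show how samples are routed to per-regime agents and updated with an A2C-style advantage.

```python
import numpy as np

# Toy sketch (not the paper's code): one advantage actor-critic "agent"
# per load-current band, each learning its band's degradation rate.
# Band edges and "true" rates are made up for illustration.

class RegimeAgent:
    def __init__(self, lr=0.05, sigma=0.01, seed=0):
        self.theta = 0.0   # actor mean: predicted voltage drop (V/step)
        self.value = 0.0   # critic: running baseline of expected reward
        self.lr, self.sigma = lr, sigma
        self.rng = np.random.default_rng(seed)

    def act(self):
        # Gaussian policy over the predicted per-step voltage drop
        return self.rng.normal(self.theta, self.sigma)

    def update(self, action, reward):
        advantage = reward - self.value  # A = R - V (the A2C advantage)
        # policy gradient: d/dtheta log N(a | theta, sigma) = (a - theta) / sigma^2
        self.theta += self.lr * advantage * (action - self.theta) / self.sigma**2
        self.value += self.lr * (reward - self.value)  # critic moving average
        return advantage

def band(current_amps, edges=(100.0, 200.0)):
    # Route a sample to its agent by load-current band (edges are hypothetical)
    return sum(current_amps > e for e in edges)

agents = [RegimeAgent(seed=k) for k in range(3)]
true_drop = {0: 0.01, 1: 0.03, 2: 0.02}  # toy per-band degradation (V/step)
for step in range(900):
    i_load = (80.0, 150.0, 250.0)[step % 3]
    k = band(i_load)
    a = agents[k].act()
    reward = -(a - true_drop[k]) ** 2    # closer prediction -> higher reward
    agents[k].update(a, reward)
# Each agent's theta drifts toward its own band's degradation rate.
```

In expectation this update moves each actor mean toward its band's rate at a speed set by the learning rate, while the critic baseline keeps the gradient estimate low-variance; the cooperative coordination and asynchronous network updates described in the paper are beyond this sketch.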
Pages: 16