Automotive fuel cell performance degradation prediction using Multi-Agent Cooperative Advantage Actor-Critic model

Cited: 0
Authors
Hou, Yanzhu [1 ]
Yin, Cong [1 ,2 ]
Sheng, Xia [3 ]
Xu, Dechao [3 ]
Chen, Junxiong [1 ]
Tang, Hao [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Hydrogen & Fuel Cell Inst, Chengdu 611731, Peoples R China
[3] Gen Inst FAW, Powertrain Dept, Changchun 130011, Peoples R China
Keywords
Proton exchange membrane fuel cell; Degradation prediction; Multi-agent advantage actor-critic method; Prognostic method; Stack
DOI
10.1016/j.energy.2025.134899
Chinese Library Classification
O414.1 [Thermodynamics]
Abstract
The performance degradation of the automotive proton exchange membrane fuel cell (PEMFC) has long been a bottleneck hindering its commercial application. Predicting fuel cell voltage degradation is vital, as it provides practical guidance for fuel cell health management to extend the stack's lifetime. While the Reinforcement Learning (RL) model Advantage Actor-Critic (A2C) has shown promise in predicting degradation, its high learning cost and poor prediction stability limit its application. In this work, a fuel cell mechanism model, which extracts degradation features according to different operating conditions, is integrated into a novel Multi-Agent Cooperative Advantage Actor-Critic (MAC-A2C) framework. The extracted aging parameters under different load currents are assigned to individual agents, whose respective actor networks are updated asynchronously to learn the particular degradation trends. The model is trained and validated on a dataset from a self-designed 80 kW PEMFC engine operated on a city bus for 3566 h. Results indicate that our method effectively tracks the voltage degradation trend, achieves high prediction accuracy with a mean absolute percentage error of 0.38 %, reduces learning costs by 87.5 %, and significantly improves prediction stability. The proposed method offers a potential solution for online health management, thereby extending the lifespan of fuel cell vehicles.
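To make the abstract's two core ingredients concrete, the sketch below illustrates (i) the one-step advantage estimate and the reported MAPE metric, and (ii) a toy multi-agent setup in which each agent owns one load-current regime and its actor is updated asynchronously in a round-robin loop. This is a minimal illustrative sketch only, not the paper's model: `BanditA2CAgent`, the single-state softmax actor with a scalar critic, the three regime names, and the reward function are all hypothetical simplifications standing in for the paper's neural actor networks and real bus data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def one_step_advantage(reward, v_s, v_next, gamma=0.99):
    """Standard one-step advantage estimate: A = r + gamma * V(s') - V(s)."""
    return reward + gamma * v_next - v_s

class BanditA2CAgent:
    """Hypothetical single-state ('bandit') A2C agent: softmax actor over
    discrete actions plus a scalar critic used as a baseline."""
    def __init__(self, n_actions, lr_pi=0.1, lr_v=0.1, rng=None):
        self.logits = np.zeros(n_actions)   # actor parameters
        self.value = 0.0                    # scalar critic (baseline)
        self.lr_pi, self.lr_v = lr_pi, lr_v
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def policy(self):
        z = self.logits - self.logits.max()  # stabilized softmax
        p = np.exp(z)
        return p / p.sum()

    def step(self, reward_fn):
        p = self.policy()
        a = self.rng.choice(len(p), p=p)
        r = reward_fn(a)
        adv = r - self.value                 # single state: A = r - V(s)
        grad_log = -p                        # d log pi(a) / d logits
        grad_log[a] += 1.0
        self.logits += self.lr_pi * adv * grad_log   # actor update
        self.value += self.lr_v * (r - self.value)   # critic update
        return a, r

# Toy setup: three load regimes, each agent learns which discretized
# degradation-rate "action" best matches its own regime.
targets = {"idle": 0, "cruise": 2, "peak": 4}
agents = {name: BanditA2CAgent(5, rng=np.random.default_rng(i))
          for i, name in enumerate(targets)}
for _ in range(1500):
    for name, agent in agents.items():       # asynchronous round-robin updates
        agent.step(lambda a, t=targets[name]: -abs(a - t))
```

The per-regime split mirrors the abstract's idea that aging parameters extracted under different load currents are handled by separate agents; the cooperative/asynchronous aspect is reduced here to interleaved updates of independent actors.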
Pages: 16