Interpretable Deep Reinforcement Learning for Optimizing Heterogeneous Energy Storage Systems

Cited by: 2
Authors
Xiong, Luolin [1 ,2 ]
Tang, Yang [1 ,2 ]
Liu, Chensheng [1 ,2 ]
Mao, Shuai [3 ]
Meng, Ke [4 ]
Dong, Zhaoyang [5 ]
Qian, Feng [1 ,2 ]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Smart Mfg Energy Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] East China Univ Sci & Technol, Engn Res Ctr Proc Syst Engn, Minist Educ, Shanghai 200237, Peoples R China
[3] Nantong Univ, Dept Elect Engn, Nantong 226019, Peoples R China
[4] Univ New South Wales, Sch Elect Engn & Telecommun, Sydney, NSW 2052, Australia
[5] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China
Keywords
Heterogeneous energy storage systems; deep reinforcement learning; pre-hoc interpretability
DOI
10.1109/TCSI.2023.3340026
Chinese Library Classification: TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline codes: 0808; 0809
Abstract
Energy storage systems (ESS) are pivotal components in the energy market, serving as both energy suppliers and consumers. ESS operators can profit from energy arbitrage by optimizing the operation of their storage equipment. To further enhance ESS flexibility within the energy market and improve renewable energy utilization, a heterogeneous photovoltaic ESS (PV-ESS) is proposed, which leverages the complementary characteristics of battery energy storage (BES) and hydrogen energy storage (HES). For scheduling the heterogeneous PV-ESS, a practical cost function plays a crucial role in guiding operators' strategies toward maximum benefit. We develop a comprehensive cost function that accounts for degradation, capital, and operation/maintenance costs to reflect real-world scenarios. Moreover, while numerous methods excel at optimizing ESS energy arbitrage, they often rely on black-box models with opaque decision-making processes, limiting their practical applicability. To overcome this limitation and enable explainable scheduling strategies, a prototype-based policy network with inherent interpretability is introduced. This network employs human-designed prototypes to guide decision-making by comparing similarities between prototypical situations and encountered situations, which yields naturally explainable scheduling strategies. Comparative results across four distinct cases demonstrate the effectiveness and practicality of the proposed pre-hoc interpretable optimization method when contrasted with black-box models.
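The core idea of a prototype-based policy, as described in the abstract, is that actions are chosen by comparing the encountered situation against a few human-designed prototypical situations. The following minimal sketch illustrates that mechanism only; the feature space, prototypes, and similarity measure here are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical prototypes in a 3-D feature space:
# (normalized PV output, electricity price, state of charge).
prototypes = np.array([
    [0.9, 0.2, 0.3],   # "high PV, cheap price, low storage"  -> favor charging
    [0.1, 0.9, 0.8],   # "low PV, high price, full storage"   -> favor discharging
    [0.5, 0.5, 0.5],   # "average conditions"                 -> favor idling
])

# Weights mapping prototype activations to action logits
# (columns: charge, discharge, idle).
proto_to_action = np.array([
    [ 2.0, -1.0, 0.0],
    [-1.0,  2.0, 0.0],
    [ 0.0,  0.0, 2.0],
])

def prototype_policy(state):
    # Similarity as negative squared distance: closer prototype, higher score.
    sims = -((prototypes - state) ** 2).sum(axis=1)
    # Action preferences are a similarity-weighted mix of prototype opinions.
    logits = softmax(sims) @ proto_to_action
    return softmax(logits), sims

state = np.array([0.85, 0.25, 0.2])   # sunny, cheap, nearly empty storage
probs, sims = prototype_policy(state)
print("closest prototype:", int(np.argmax(sims)))   # -> 0 ("charge" situation)
print("action probabilities:", probs.round(3))
```

The interpretability comes from the similarity scores themselves: the nearest prototype ("high PV, cheap price, low storage") directly explains why the policy prefers charging in this state.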
Pages: 910-921 (12 pages)
Related Papers (50 records)
  • [41] Operation strategy optimization of combined cooling, heating, and power systems with energy storage and renewable energy based on deep reinforcement learning
    Ruan, Yingjun
    Liang, Zhenyu
    Qian, Fanyue
    Meng, Hua
    Gao, Yuan
    [J]. JOURNAL OF BUILDING ENGINEERING, 2023, 65
  • [42] Control Strategy of Microgrid Energy Storage System Based on Deep Reinforcement Learning
    Liang H.
    Li H.
    Zhang H.
    Hu Z.
    Qin Z.
    Cao J.
    [J]. Dianwang Jishu/Power System Technology, 2021, 45 (10): 3869 - 3876
  • [43] Methodology for Interpretable Reinforcement Learning Model for HVAC Energy Control
    Kotevska, Olivera
    Munk, Jeffrey
    Kurte, Kuldeep
    Du, Yan
    Amasyali, Kadir
    Smith, Robert W.
    Zandi, Helia
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020: 1555 - 1564
  • [44] Optimizing Robotic Mobile Fulfillment Systems for Order Picking Based on Deep Reinforcement Learning
    Zhu, Zhenyi
    Wang, Sai
    Wang, Tuantuan
    [J]. SENSORS, 2024, 24 (14)
  • [45] Optimizing ZX-diagrams with deep reinforcement learning
    Naegele, Maximilian
    Marquardt, Florian
    [J]. MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (3)
  • [46] Deep Reinforcement Learning for Optimizing Finance Portfolio Management
    Hu, Yuh-Jong
    Lin, Shang-Jen
    [J]. PROCEEDINGS 2019 AMITY INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AICAI), 2019: 14 - 20
  • [47] Data-driven active corrective control in power systems: an interpretable deep reinforcement learning approach
    Li, Beibei
    Liu, Qian
    Hong, Yue
    He, Yuxiong
    Zhang, Lihong
    He, Zhihong
    Feng, Xiaoze
    Gao, Tianlu
    Yang, Li
    [J]. FRONTIERS IN ENERGY RESEARCH, 2024, 12
  • [48] Optimizing Sequential Experimental Design with Deep Reinforcement Learning
    Blau, Tom
    Bonilla, Edwin V.
    Chades, Iadine
    Dezfouli, Amir
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [49] Optimizing warfarin dosing using deep reinforcement learning
    Anzabi Zadeh, Sadjad
    Street, W. Nick
    Thomas, Barrett W.
    [J]. JOURNAL OF BIOMEDICAL INFORMATICS, 2023, 137
  • [50] Deep reinforcement learning for wind and energy storage coordination in wholesale energy and ancillary service markets
    Li, Jinhao
    Wang, Changlong
    Wang, Hao
    [J]. ENERGY AND AI, 2023, 14