Innovative energy solutions: Evaluating reinforcement learning algorithms for battery storage optimization in residential settings

Cited: 0
Authors
Dou, Zhenlan [1 ]
Zhang, Chunyan [1 ]
Li, Junqiang [2 ]
Li, Dezhi [3 ]
Wang, Miao [3 ]
Sun, Lue [3 ]
Wang, Yong [2 ]
Affiliations
[1] State Grid Shanghai Municipal Elect Power Co, Shanghai 200122, Peoples R China
[2] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Peoples R China
[3] China Elect Power Res Inst, Beijing Key Lab Demand Side Multienergy Carriers O, Beijing 100192, Peoples R China
Keywords
Reinforcement learning; Optimal control; Operation scheduling; Building energy management; Energy storage; Solar PV system; SYSTEM; MANAGEMENT; OPERATION; BEHAVIOR; BIOMASS
DOI
10.1016/j.psep.2024.09.123
CLC Number
X [Environmental Science, Safety Science]
Discipline Code
08; 0830
Abstract
The implementation of battery energy storage systems (BESS) and the efficient optimization of their scheduling are crucial research challenges for managing the intermittency and volatility of solar photovoltaic (PV) systems. Nevertheless, a review of the existing literature reveals notable gaps in optimal scheduling for such energy systems: most models concentrate on a single objective, and only a few address the complexity of multi-objective scenarios. This study examines grid-connected homes equipped with a BESS and a solar PV system and leverages four reinforcement learning (RL) algorithms, selected for their distinct training methodologies, to develop effective scheduling models. The findings demonstrate that the RL model based on Trust Region Policy Optimization (TRPO) effectively manages the BESS and PV system despite real-world uncertainties, and the case study confirms the suitability and effectiveness of this approach. The TRPO-based RL framework surpasses previous models in decision-making by selecting the most effective BESS scheduling strategies. It achieved the highest mean self-sufficiency rate, exceeding the A3C (Asynchronous Advantage Actor-Critic), DDPG (Deep Deterministic Policy Gradient), and TAC (Twin Actor-Critic) models by approximately 3%, 0.72%, and 3.5%, respectively, which translates into greater autonomy and economic benefit under dynamic real-world conditions. The framework is intended for seamless integration into an automated energy-plant environment that facilitates regular electricity trading among multiple buildings. Supported by initiatives such as the Renewable Energy Certificate weight, this technology is expected to play a crucial role in balancing power generation and consumption. The MILP (Mixed Integer Linear Programming) formulation achieved a self-sufficiency rate of 29.12%, surpassing A3C, TRPO, DDPG, and TAC by 2.48%, 0.64%, 2%, and 3.04%, respectively.
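To make the scheduling task concrete, the following is a minimal sketch (not taken from the paper) of a household battery/PV environment whose per-step reward is the self-sufficiency rate, i.e. the share of household load covered by PV generation plus battery discharge. The class name ResidentialBESSEnv, the capacity, power, and efficiency parameters, and the toy PV/load profiles are all assumptions made for illustration; the paper's actual simulation data, environment design, and TRPO training setup are not described in this record.

```python
import numpy as np


class ResidentialBESSEnv:
    """Minimal, hypothetical battery/PV scheduling environment.

    Observation: (hour of day, state of charge, PV power, household load)
    Action     : charging setpoint in [-1, 1] (negative = discharge)
    Reward     : per-step self-sufficiency, the share of load covered by
                 PV generation plus battery discharge.
    """

    def __init__(self, capacity_kwh=10.0, max_power_kw=3.0,
                 efficiency=0.95, dt_hours=1.0):
        self.capacity = capacity_kwh   # assumed usable capacity
        self.max_power = max_power_kw  # assumed charge/discharge limit
        self.eff = efficiency          # assumed one-way charging efficiency
        self.dt = dt_hours
        self.reset()

    def reset(self):
        self.hour = 0
        self.soc = 0.5 * self.capacity  # start half full
        return self._obs()

    def _profiles(self, hour):
        # Toy PV and load profiles in kW; real measurements would replace these.
        pv = 4.0 * np.sin(np.pi * (hour - 6) / 12) if 6 <= hour <= 18 else 0.0
        load = 0.8 + 1.5 * np.exp(-((hour - 19) ** 2) / 8.0)  # evening peak
        return max(pv, 0.0), load

    def _obs(self):
        pv, load = self._profiles(self.hour)
        return np.array([self.hour / 24.0, self.soc / self.capacity, pv, load])

    def step(self, action):
        pv, load = self._profiles(self.hour)
        power = float(np.clip(action, -1.0, 1.0)) * self.max_power  # kW

        if power >= 0.0:  # charging (from PV surplus or the grid)
            charge = min(power, (self.capacity - self.soc) / self.dt)
            self.soc += self.eff * charge * self.dt
            discharge = 0.0
        else:             # discharging toward the household load
            discharge = min(-power, self.soc / self.dt, load)
            self.soc -= discharge * self.dt

        # Self-sufficiency for this step: load met by PV plus battery discharge.
        reward = min(load, pv + discharge) / load

        self.hour += 1
        done = self.hour >= 24
        return self._obs(), reward, done, {}


if __name__ == "__main__":
    # Sanity-check rollout with a naive rule: charge on PV surplus, else discharge.
    # An RL agent (e.g. TRPO) would learn this mapping from data instead.
    env = ResidentialBESSEnv()
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = 1.0 if obs[2] > obs[3] else -1.0
        obs, reward, done, _ = env.step(action)
        total += reward
    print(f"Mean daily self-sufficiency with the toy rule: {total / 24:.2%}")
```

In practice, such an environment would be driven by measured PV and load data, the policy would be trained with an on-policy algorithm such as TRPO, and the self-sufficiency rate would be evaluated over the full scheduling horizon rather than a single day.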
Pages: 2203-2221
Page count: 19