Benchmarking reinforcement learning algorithms for demand response applications

Cited by: 0
Authors
Mbuwir, Brida, V [1 ,2 ,3 ]
Manna, Carlo [1 ,3 ]
Spiessens, Fred [1 ,3 ]
Deconinck, Geert [1 ,2 ]
Affiliations
[1] AMO, EnergyVille, Thor Pk 8130, B-3600 Genk, Belgium
[2] Katholieke Univ Leuven, ESAT ELECTA, Kasteelpk Arenberg 10, B-3001 Leuven, Belgium
[3] AMO, Flemish Inst Technol Res VITO, Boeretang 200, B-2400 Mol, Belgium
Keywords
benchmarking; control; demand response; reinforcement learning
DOI
10.1109/isgt-europe47291.2020.9248800
Chinese Library Classification (CLC)
TP301 [Theory and methods]
Discipline code
081202
Abstract
Through many recent successes in simulation and real-world projects, reinforcement learning (RL) has emerged as a promising approach for demand response applications, especially in the residential setting. Reinforcement learning is a self-learning and self-adaptive technique that can control flexibility-providing devices by relying mainly on historical and/or real-time data rather than on system models. This paper presents a benchmark of five RL algorithms - fitted Q-iteration, policy iteration with Q-functions, double Q-learning, REINFORCE and actor-critic - and compares them with a model-based optimal control, a rule-based control and a naive control. We consider the task of controlling the operation of a heat pump (HP) for space heating in a building with a photovoltaic (PV) installation. The HP is controlled with the goal of maximizing PV self-consumption and, consequently, minimizing electricity cost. To evaluate the performance of these algorithms, three main indicators are considered: PV self-consumption, electricity cost and computation time. Based on simulation results in which the same number of training samples is used, fitted Q-iteration outperforms the other RL algorithms as well as the naive and rule-based controls in terms of PV self-consumption and net electricity cost. However, compared to the optimal control, a 7.6% decrease in PV self-consumption and a 77% increase in net electricity cost are observed.
Pages: 289-293 (5 pages)
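
For readers unfamiliar with the best-performing method in the abstract, the sketch below shows the core loop of fitted Q-iteration on a fixed batch of logged transitions, as one might apply it to a discrete heat-pump on/off control task. It is a minimal illustration, not the paper's implementation: the state features, toy data, regressor choice (extremely randomized trees, as in Ernst et al.'s original FQI formulation) and all variable names are assumptions.

```python
# Minimal fitted Q-iteration sketch on a fixed batch of transitions
# (s, a, r, s'). Hypothetical example, not the paper's code: terminal-state
# handling is omitted for brevity, and the toy data below is random.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(states, actions, rewards, next_states,
                       n_actions, gamma=0.95, n_iterations=50):
    """Fit a regressor approximating Q(s, a) from a batch of transitions."""
    sa = np.column_stack([states, actions])  # regression inputs: (s, a) pairs
    q = None
    for _ in range(n_iterations):
        if q is None:
            targets = rewards                # first iteration: Q_1(s, a) = r
        else:
            # Bellman targets: r + gamma * max_a' Q_{k-1}(s', a'),
            # evaluating the previous Q-function for every discrete action.
            q_next = np.column_stack([
                q.predict(np.column_stack(
                    [next_states, np.full(len(next_states), a)]))
                for a in range(n_actions)
            ])
            targets = rewards + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(sa, targets)
    return q

# Toy usage: 2-D state (e.g. indoor temperature, PV surplus), binary HP action.
rng = np.random.default_rng(0)
S = rng.normal(size=(1000, 2))
A = rng.integers(0, 2, size=1000)
R = rng.normal(size=1000)
S2 = rng.normal(size=(1000, 2))
Q = fitted_q_iteration(S, A, R, S2, n_actions=2)
# Greedy control: pick the HP action with the highest estimated Q-value.
best_action = int(np.argmax([Q.predict([[20.0, 1.2, a]])[0] for a in (0, 1)]))
```

In this setting the reward would encode the electricity-cost or PV self-consumption signal described in the abstract. Because FQI refits a regressor on the same stored batch at every iteration, it reuses each historical sample many times, which is consistent with the strong sample efficiency reported for it in the benchmark.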