Real-World Implementation of Reinforcement Learning Based Energy Coordination for a Cluster of Households

Cited by: 0
Authors
Gokhale, Gargya [1 ]
Tiben, Niels [1 ]
Verwee, Marie-Sophie [1 ]
Lahariya, Manu [1 ]
Claessens, Bert [2 ]
Develder, Chris [1 ]
Affiliations
[1] Univ Ghent, imec, IDLab, Ghent, Belgium
[2] Univ Ghent, imec, IDLab, Beebop ai, Ghent, Belgium
Keywords
Demand Response; Reinforcement Learning; Building Cluster; Coordination; Advantage Function; Buildings; Model
DOI
10.1145/3600100.3625681
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Given its substantial contribution of 40% to global power consumption, the built environment has received increasing attention as a source of flexibility to support the modern power grid. Previous research in this area mainly focused on energy management of individual buildings. In contrast, in this paper we focus on aggregated control of a set of residential buildings to provide grid-supporting services, which could eventually include ancillary services. In particular, we present a real-life pilot study of the effectiveness of reinforcement learning (RL) in coordinating the power consumption of 8 residential buildings to jointly track a target power signal. Our RL approach relies solely on observed data from individual households and does not require any explicit building models or simulators, making it practical to implement and easy to scale. We show the feasibility of our proposed RL-based coordination strategy in a real-world setting. In a 4-week case study, we demonstrate a hierarchical control system that relies on an RL-based ranking system to select which households to activate flex assets from, and on a real-time PI-control-based power dispatch mechanism to control the selected assets. Our results demonstrate satisfactory power tracking and the effectiveness of the RL-based ranks, which are learned in a purely data-driven manner.
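The hierarchical scheme the abstract describes — a learned ranking over households feeding a real-time PI power-dispatch loop — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `PIController` class, the `dispatch` helper, the gains, and all numeric values are hypothetical, and the learned ranking scores are taken as given inputs.

```python
class PIController:
    """Minimal proportional-integral controller (hypothetical gains)."""

    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, error, dt=1.0):
        # Accumulate the tracking error and return the PI correction.
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


def dispatch(target_power, baseline_power, scores, capacities, pi):
    """Allocate the PI correction to households in order of descending
    RL-derived score, up to each household's flex capacity (kW).

    Returns a per-household setpoint list; households not needed to
    cover the correction receive a zero setpoint.
    """
    correction = pi.step(target_power - baseline_power)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    setpoints = [0.0] * len(scores)
    remaining = correction
    for i in order:
        if remaining <= 0:
            break
        setpoints[i] = min(capacities[i], remaining)
        remaining -= setpoints[i]
    return setpoints


# Example: track a 3 kW target from a 0 kW baseline across three households.
pi = PIController(kp=1.0, ki=0.1)
setpoints = dispatch(3.0, 0.0, scores=[0.2, 0.9, 0.5],
                     capacities=[2.0, 1.0, 1.5], pi=pi)
```

In this sketch the RL component only decides *which* households to activate (the ranking), while the PI loop decides *how much* total power to request — mirroring the separation of concerns between the ranking system and the dispatch mechanism described in the abstract.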
Pages: 347 - 351
Number of pages: 5