Real-world challenges for multi-agent reinforcement learning in grid-interactive buildings

Cited by: 16
Authors
Nweye, Kingsley [1 ]
Liu, Bo [2 ]
Stone, Peter [2 ]
Nagy, Zoltan [1 ]
Affiliations
[1] Univ Texas Austin, Dept Civil Architectural & Environm Engn, Intelligent Environm Lab, 301 E Dean Keeton St Stop,ECJ 4 200, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Comp Sci, 2317 Speedway,GDC 2 302, Austin, TX 78712 USA
Keywords
Grid-interactive buildings; Benchmarking; Reinforcement learning; Demand response
DOI
10.1016/j.egyai.2022.100202
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Building upon prior research that highlighted the need for standardized environments in building control research, and inspired by recently introduced challenges for real-life reinforcement learning (RL) control, we propose a non-exhaustive set of nine real-world challenges for RL control in grid-interactive buildings (GIBs). We argue that research in this area should be expressed within this framework, in addition to providing a standardized environment for repeatability. Advanced controllers such as model predictive control (MPC) and RL control each have advantages and disadvantages that prevent them from being implemented in real-world problems. Comparisons between the two are rare and often biased. By focusing on the challenges, we can investigate controller performance under a variety of situations and generate a fair comparison. As a demonstration, we implement the offline learning challenge in CityLearn, an OpenAI Gym environment for the easy implementation of RL agents in a demand-response setting, where the aggregated electricity demand curve is reshaped by controlling the energy storage of a diverse set of buildings in a district. We use CityLearn to study the impact of different levels of domain knowledge and of RL algorithm complexity, and show that the sequence of operations (SOO) used in the rule-based controller (RBC) that provides fixed logs to RL agents during offline training affects agent performance when evaluated on a set of four energy flexibility metrics. Longer offline training from an optimized RBC leads to improved performance in the long run, whereas RL agents trained on logs from a simplified RBC risk poorer performance as the offline training period increases. We also observe no impact on performance from information sharing amongst agents. We call for a more interdisciplinary effort from the research community to address these real-world challenges and unlock the potential of GIB controllers.
Pages: 10