Offline-Online Reinforcement Learning for Energy Pricing in Office Demand Response: Lowering Energy and Data Costs

Cited by: 5
Authors:
Jang, Doseok [1 ]
Spangher, Lucas [1 ]
Srivastava, Tarang [1 ]
Khattar, Manan [1 ]
Agwan, Utkarsha [1 ]
Nadarajah, Selvaprabu [2 ]
Spanos, Costas [1 ]
Affiliations:
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] Univ Illinois, Dept Informat & Decis Sci, Chicago, IL 60680 USA
Funding:
National Research Foundation, Singapore
Keywords:
prosumer; aggregation; reinforcement learning; microgrid; transactive energy; MANAGEMENT; RESOURCE;
DOI:
10.1145/3486611.3486668
CLC Number:
TP39 [Computer Applications]
Discipline Codes:
081203; 0835
Abstract:
Our team is proposing to run a full-scale energy demand response experiment in an office building. Although this is an exciting endeavor that will provide value to the community, collecting training data for the reinforcement learning agent is costly and will be limited. In this work, we examine how offline training can be leveraged to minimize data costs (accelerate convergence) and program implementation costs. We present two approaches: pretraining our model on simulated tasks to warm-start the experiment, and using a planning model, trained to simulate the real world, to supply rewards to the agent. Our results demonstrate the utility of offline reinforcement learning for efficient price-setting in the energy demand response problem.
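The warm-start idea described in the abstract can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: a one-state tabular Q-learner is first trained offline on a cheap pricing simulator (with an assumed demand elasticity), and its value estimates are then used to warm-start a short online phase on an environment with different dynamics. All names (`SimplePricingEnv`, `train_q`) and parameters are hypothetical.

```python
import random

class SimplePricingEnv:
    """Toy demand-response environment: the agent picks a price level
    (action 0..4), demand responds per an elasticity, and the reward is
    the negative resulting load. Purely illustrative."""
    def __init__(self, elasticity):
        self.elasticity = elasticity
    def step(self, price_level):
        # Higher prices shed more load; elasticity controls how much.
        load = 10.0 - self.elasticity * price_level + random.gauss(0, 0.5)
        return -max(load, 0.0)  # lower load -> higher reward

def train_q(env, q=None, episodes=2000, n_actions=5, lr=0.1, eps=0.2):
    """One-state (bandit-style) epsilon-greedy Q-learning.
    Pass a pretrained q to warm-start the online phase."""
    q = list(q) if q is not None else [0.0] * n_actions
    for _ in range(episodes):
        if random.random() < eps:
            a = random.randrange(n_actions)          # explore
        else:
            a = max(range(n_actions), key=q.__getitem__)  # exploit
        r = env.step(a)
        q[a] += lr * (r - q[a])  # running-average update toward reward
    return q

random.seed(0)
# Offline phase: many cheap simulated interactions, assumed elasticity.
q_offline = train_q(SimplePricingEnv(elasticity=1.5), episodes=5000)
# Online phase: warm-started, so far fewer "real" interactions are needed
# than training from scratch, mirroring the data-cost argument above.
q_online = train_q(SimplePricingEnv(elasticity=1.0), q=q_offline, episodes=200)
best_price = max(range(len(q_online)), key=q_online.__getitem__)
```

The online phase starts from value estimates that are biased (the simulator's elasticity is wrong) but correctly ordered, so the agent needs only a short correction period rather than full exploration from zero.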
Pages: 131-139
Page count: 9
Related Papers (50 records):
  • [1] Offline-Online Reinforcement Learning for Generalizing Demand Response Price-Setting to Energy Systems
    Jang, Doseok
    Spangher, Lucas
    Khattar, Manan
    Agwan, Utkarsha
    Nadarajah, Selvaprabu
    Spanos, Costas
    [J]. BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021, : 220 - 221
  • [2] RLSynC: Offline-Online Reinforcement Learning for Synthon Completion
    Baker, Frazier N.
    Chen, Ziqi
    Adu-Ampratwum, Daniel
    Ning, Xia
    [J]. JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2024, 64 (17) : 6723 - 6735
  • [3] Offline Pricing and Demand Learning with Censored Data
    Bu, Jinzhi
    Simchi-Levi, David
    Wang, Li
    [J]. MANAGEMENT SCIENCE, 2023, 69 (02) : 885 - 903
  • [4] Hybrid Offline/Online Optimization for Energy Management via Reinforcement Learning
    Silvestri, Mattia
    De Filippo, Allegra
    Ruggeri, Federico
    Lombardi, Michele
    [J]. INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH, CPAIOR 2022, 2022, 13292 : 358 - 373
  • [5] Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning
    Yao, Zhengyu
    Yoon, Hwan-Sik
    Hong, Yang-Ki
    [J]. ENERGIES, 2023, 16 (02)
  • [6] Agile Cache Replacement in Edge Computing via Offline-Online Deep Reinforcement Learning
    Wang, Zhe
    Hu, Jia
    Min, Geyong
    Zhao, Zhiwei
    Wang, Zi
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (04) : 663 - 674
  • [7] Efficient Online Reinforcement Learning with Offline Data
    Ball, Philip J.
    Smith, Laura
    Kostrikov, Ilya
    Levine, Sergey
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [8] Reinforcement Learning based Pricing for Demand Response
    Ghasemkhani, Amir
    Yang, Lei
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2018,
  • [9] COST AND PRICING, II: LOWERING ENERGY COSTS
    WEINER, M
    [J]. METAL FINISHING, 1984, 82 (10) : 71 - 72
  • [10] Offline-Online Design for Energy-Efficient IRS-Aided UAV Communications
    Wang, Tianhao
    Pang, Xiaowei
    Liu, Mingqian
    Zhao, Nan
    Nallanathan, Arumugam
    Wang, Xianbin
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (02) : 2942 - 2947