Meta-Reinforcement Learning-Based Transferable Scheduling Strategy for Energy Management

Cited by: 10
Authors
Xiong, Luolin [1 ,2 ]
Tang, Yang [1 ,2 ]
Liu, Chensheng [1 ,2 ]
Mao, Shuai [3 ]
Meng, Ke [4 ]
Dong, Zhaoyang [5 ]
Qian, Feng [1 ,2 ]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Smart Mfg Energy Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] East China Univ Sci & Technol, Engn Res Ctr Proc Syst Engn, Minist Educ, Shanghai 200237, Peoples R China
[3] Nantong Univ, Dept Elect Engn, Nantong 226019, Peoples R China
[4] Univ New South Wales, Sch Elect Engn & Telecommun, Sydney, NSW 2052, Australia
[5] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China (NSFC);
Keywords
Home energy management system; transferable scheduling strategies; meta-reinforcement learning; long short-term memory; STORAGE; SYSTEMS; NETWORKS;
DOI
10.1109/TCSI.2023.3240702
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
In a Home Energy Management System (HEMS), the scheduling of energy storage equipment and shiftable loads has been widely studied to reduce household energy costs. However, existing data-driven methods can hardly ensure transferability across different tasks, such as customers with diverse preferences and appliances, or fluctuations of renewable energy across seasons. This paper designs a transferable scheduling strategy for HEMS under different tasks using a Meta-Reinforcement Learning (Meta-RL) framework, which alleviates the heavy data dependence and long training time of other data-driven methods. Specifically, a more practical and complete demand-response scenario of HEMS is considered in the proposed Meta-RL framework, in which customers with distinct electricity preferences as well as fluctuating renewable energy in different seasons are taken into account. The proposed Meta-RL-based transferable scheduling strategy integrates an inner level and an outer level, where the inner level ensures fast learning and the outer level provides appropriate initial model parameters. Moreover, a Long Short-Term Memory (LSTM) network is employed to extract features from historical actions and rewards, which overcomes the challenges brought by the uncertainties of renewable energy and the customers' loads and enhances the robustness of the scheduling strategies. A set of experiments conducted on real data from Australia's electricity network verifies the performance of the transferable scheduling strategy.
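The following is a minimal sketch (not the authors' implementation) of the two-level Meta-RL idea described in the abstract: an outer level learns shared initial parameters across HEMS tasks (seasons, customer preferences), an inner level quickly adapts them to one sampled task, and an LSTM policy conditions on histories of states, past actions, and past rewards. The toy environment, reward shaping, action set, and the first-order (Reptile-style) outer update are simplifying assumptions made for brevity; the paper's exact HEMS model and meta-update may differ.

```python
# Hedged illustration of inner/outer-level Meta-RL with an LSTM policy for HEMS scheduling.
import copy
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_ACTIONS = 3    # 0: idle, 1: charge battery, 2: discharge battery (assumed action set)
STATE_DIM = 3    # [battery state of charge, electricity price, net load] (assumed state)
HORIZON = 24     # hourly scheduling steps over one day


class ToyHEMSTask:
    """Hypothetical HEMS task: each task has its own price and net-load profile."""
    def __init__(self, seed):
        g = torch.Generator().manual_seed(seed)
        self.price = 0.1 + 0.2 * torch.rand(HORIZON, generator=g)   # $/kWh
        self.net_load = torch.rand(HORIZON, generator=g)            # kW (load minus PV)

    def reset(self):
        self.t, self.soc = 0, 0.5
        return torch.tensor([self.soc, float(self.price[0]), float(self.net_load[0])])

    def step(self, action):
        delta = {0: 0.0, 1: 0.2, 2: -0.2}[action]                   # battery energy step
        self.soc = min(1.0, max(0.0, self.soc + delta))
        grid = float(self.net_load[self.t]) + delta                 # energy drawn from grid
        reward = -float(self.price[self.t]) * max(grid, 0.0)        # negative electricity cost
        self.t += 1
        done = self.t >= HORIZON
        nxt = torch.zeros(STATE_DIM) if done else torch.tensor(
            [self.soc, float(self.price[self.t]), float(self.net_load[self.t])])
        return nxt, reward, done


class LSTMPolicy(nn.Module):
    """LSTM over (state, previous action one-hot, previous reward) histories."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(STATE_DIM + N_ACTIONS + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_ACTIONS)

    def forward(self, x, hc=None):
        out, hc = self.lstm(x, hc)
        return self.head(out[:, -1]), hc


def episode_loss(policy, task):
    """Roll out one episode and return a REINFORCE loss (return-weighted log-probs)."""
    state, hc = task.reset(), None
    prev_a, prev_r = torch.zeros(N_ACTIONS), 0.0
    log_probs, rewards = [], []
    done = False
    while not done:
        inp = torch.cat([state, prev_a, torch.tensor([prev_r])]).view(1, 1, -1)
        logits, hc = policy(inp, hc)
        dist = Categorical(logits=logits)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        state, prev_r, done = task.step(int(a))
        prev_a = torch.zeros(N_ACTIONS)
        prev_a[int(a)] = 1.0
        rewards.append(prev_r)
    return -sum(rewards) * torch.stack(log_probs).sum()


meta_policy = LSTMPolicy()
META_LR, INNER_LR, INNER_STEPS = 0.05, 1e-2, 3

for it in range(20):                                  # outer (meta) iterations
    task = ToyHEMSTask(seed=it)                       # sample a season/preference task
    adapted = copy.deepcopy(meta_policy)              # inner level: fast task adaptation
    opt = torch.optim.SGD(adapted.parameters(), lr=INNER_LR)
    for _ in range(INNER_STEPS):
        loss = episode_loss(adapted, task)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Outer level: a first-order (Reptile-style) stand-in for the meta-update, nudging the
    # shared initialization toward the parameters adapted on this task.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_policy.parameters(), adapted.parameters()):
            p_meta += META_LR * (p_task - p_meta)
```

The first-order outer update is chosen here only to keep the sketch short and free of second-order gradients; a MAML-style outer level that differentiates through the inner adaptation steps would follow the same inner/outer structure.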
Pages: 1685-1695
Number of pages: 11