Q-Learning-Based Model Predictive Control for Energy Management in Residential Aggregator

Cited by: 45
Authors
Ojand, Kianoosh [1 ]
Dagdougui, Hanane [1 ,2 ]
Affiliations
[1] Polytech Montreal, Dept Math & Ind Engn, Montreal, PQ H3T 1J4, Canada
[2] GERAD Res Ctr, Montreal, PQ H3T 2A7, Canada
Keywords
HVAC; State of charge; Buildings; Uncertainty; Real-time systems; Load modeling; Energy management systems; Demand response (DR); distributed energy resources (DERs); electric vehicles (EVs); mixed-integer linear programming (MILP); model predictive control (MPC); reinforcement learning; residential community; thermostatically controlled loads (TCLs); DEMAND RESPONSE; BUILDINGS; STRATEGY; NETWORK;
DOI
10.1109/TASE.2021.3091334
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This article presents a demand response scheduling model for a residential community using an energy management system aggregator. The aggregator manages a set of resources, including a photovoltaic system, an energy storage system, thermostatically controlled loads, and electric vehicles. The solution dynamically controls power demand and distributed energy resources to improve the match between renewable power generation and consumption at the community level, while trading electricity in both the day-ahead and real-time markets to reduce the aggregator's operational costs. The problem is formulated as a mixed-integer linear programming problem whose objective is to minimize the operation and degradation costs of the energy storage system and the electric vehicle batteries. To mitigate the uncertainties associated with system operation, a two-level model predictive control (MPC) scheme integrating a Q-learning reinforcement learning model is designed to coordinate controllers operating at different time scales. The MPC algorithm makes day-ahead decisions based on predictions of uncertain parameters, whereas the Q-learning algorithm makes real-time decisions based on real-time data. The problem is solved for various sets of houses. Results demonstrate that houses gain more benefit when operating in aggregate mode.
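The real-time layer described in the abstract relies on the standard tabular Q-learning update. As a rough illustration only (the state and action sets below are toy placeholders, not the paper's formulation), the update Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)) can be sketched as:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())  # greedy value of the next state
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical toy problem: two battery states of charge, two actions.
states = ["low_soc", "high_soc"]
actions = ["charge", "discharge"]
Q = {s: {a: 0.0 for a in actions} for s in states}

# One experience tuple (s, a, r, s'); from a zero-initialized table the
# updated entry becomes alpha * reward = 0.1.
q_update(Q, "low_soc", "charge", reward=1.0, next_state="high_soc")
print(Q["low_soc"]["charge"])  # → 0.1
```

In the paper's setting the reward would reflect real-time market prices and constraint violations, and the update would run at the fast time scale beneath the day-ahead MPC decisions.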
Pages: 70-81
Page count: 12