An Energy Dynamic Control Algorithm Based on Reinforcement Learning for Data Centers

Cited by: 0
Authors
Xiang, Yao [1 ]
Yuan, Jingling [1 ]
Luo, Ruiqi [1 ]
Zhong, Xian [1 ]
Li, Tao [2 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Technol, Wuhan 430070, Hubei, Peoples R China
[2] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Reinforcement learning; double Q-learning; dynamic energy control; energy cost reduction; GENERATION; MANAGEMENT; POWER; COST;
DOI
10.1142/S0218001419510091
CLC classification code
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, how to use renewable energy to reduce the energy cost of Internet data centers (IDCs) has become an urgent problem. More and more solutions turn to machine learning, but many existing methods rely on future information that is difficult to obtain in actual operation. In this paper, we focus on reducing the energy cost of an IDC by controlling the flow of renewable energy without any future information. We propose an efficient energy dynamic control algorithm based on reinforcement learning, which approximates the optimal solution by learning from the feedback of historical control decisions. To avoid overestimation and improve the convergence of the algorithm, we further optimize it with the double Q-learning method. Extensive experimental results show that, compared with other algorithms, our algorithm saves energy cost by 18.3% and reduces the rate of grid intervention by 26.2% on average, and thus has good application prospects.
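The abstract credits the overestimation fix to double Q-learning. The minimal tabular Python sketch below illustrates only that update rule, not the authors' implementation: the state/action discretization, the constants N_STATES, N_ACTIONS, ALPHA, GAMMA, EPSILON, and the function names are illustrative assumptions rather than the paper's energy-flow model.

# Minimal tabular double Q-learning sketch (van Hasselt, 2010).
# All sizes and constants below are assumed placeholders, not values from the paper.
import numpy as np

N_STATES, N_ACTIONS = 50, 4        # assumed discretization of the energy-control state/action space
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q_A = np.zeros((N_STATES, N_ACTIONS))
Q_B = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def select_action(state):
    # Epsilon-greedy over the sum of both estimators.
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q_A[state] + Q_B[state]))

def double_q_update(state, action, reward, next_state):
    # Randomly pick one table to update; choose the greedy action with that table
    # but evaluate it with the other table, which mitigates maximization bias.
    if rng.random() < 0.5:
        best = int(np.argmax(Q_A[next_state]))
        target = reward + GAMMA * Q_B[next_state, best]
        Q_A[state, action] += ALPHA * (target - Q_A[state, action])
    else:
        best = int(np.argmax(Q_B[next_state]))
        target = reward + GAMMA * Q_A[next_state, best]
        Q_B[state, action] += ALPHA * (target - Q_B[state, action])

In use, select_action would pick an energy-flow decision at each control interval and double_q_update would incorporate the observed cost feedback; decoupling action selection from action evaluation across the two tables is what distinguishes this from standard Q-learning.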
Pages: 24