Improve the Stability and Robustness of Power Management through Model-free Deep Reinforcement Learning

Cited: 0
Authors
Chen, Lin [1]
Li, Xiao [1]
Xu, Jiang [1,2]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Microelect Thrust, Hong Kong, Peoples R China
Keywords
power management; deep reinforcement learning; experience replay; federated learning; multicore system
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812 (Computer Science and Technology)
Abstract
Achieving high performance with low energy consumption has become a primary design objective in multi-core systems. Recently, power management based on reinforcement learning has shown great potential for adapting to dynamic environments without much prior knowledge. However, the conventional Q-learning (QL) algorithms adopted in most existing work suffer from serious problems of scalability, instability, and overestimation. In this paper, we present a deep reinforcement learning-based approach that improves the stability and robustness of power management while reducing the energy-delay product (EDP) under user-specified performance requirements. The comprehensive status of the system is monitored periodically, making our controller sensitive to environmental changes. To further improve learning effectiveness, our approach shares knowledge among multiple devices. Experimental results on multiple realistic applications show that the proposed method reduces instability by up to 68% compared with QL. Through knowledge sharing among multiple devices, our federated approach achieves around a 4.8% EDP improvement over QL on average.
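The record does not include the paper's implementation details, but as a rough illustration of the ingredients the abstract names (a deep Q-network controller over discrete DVFS operating points, an experience-replay buffer, and federated knowledge sharing across devices), here is a minimal PyTorch sketch. Every name and parameter here (PowerAgent, fed_average, the state features, N_FREQ_LEVELS) is an assumption for illustration, not the authors' method.

```python
# Minimal sketch, assuming: 4 monitored state features, 5 discrete DVFS
# levels, epsilon-greedy DQN with experience replay, and federated
# averaging of Q-network weights as the knowledge-sharing step.
# None of these choices come from the paper itself.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4        # e.g. utilization, temperature, power, perf counter (assumed)
N_FREQ_LEVELS = 5    # discrete DVFS operating points (assumed)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_FREQ_LEVELS))

    def forward(self, s):
        return self.net(s)

class PowerAgent:
    def __init__(self, gamma=0.9, eps=0.1, lr=1e-3, buf=10_000):
        self.q = QNet()
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=buf)   # experience-replay buffer
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        # Epsilon-greedy choice of the next DVFS level from the
        # periodically monitored system state.
        if random.random() < self.eps:
            return random.randrange(N_FREQ_LEVELS)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def store(self, s, a, r, s2):
        # r could be, e.g., the negative EDP of the last control interval.
        self.replay.append((s, a, r, s2))

    def learn(self, batch_size=32):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s2 = map(torch.tensor, zip(*batch))
        q = self.q(s.float()).gather(1, a.view(-1, 1)).squeeze(1)
        with torch.no_grad():
            target = r.float() + self.gamma * self.q(s2.float()).max(1).values
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def fed_average(agents):
    """Knowledge sharing: average Q-network weights across devices."""
    avg = {k: torch.stack([a.q.state_dict()[k].float() for a in agents]).mean(0)
           for k in agents[0].q.state_dict()}
    for a in agents:
        a.q.load_state_dict(avg)
```

A reward such as the negative EDP of the last interval, with a penalty whenever the user-specified performance requirement is violated, would match the objective the abstract states; fed_average would then be invoked periodically across the participating devices to realize the federated knowledge sharing.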
Pages: 1371-1376
Number of pages: 6