Improve the Stability and Robustness of Power Management through Model-free Deep Reinforcement Learning

Cited: 0
Authors
Chen, Lin [1 ]
Li, Xiao [1 ]
Xu, Jiang [1 ,2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Microelect Thrust, Hong Kong, Peoples R China
Keywords
power management; deep reinforcement learning; experience replay; federated learning; multicore system
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Achieving high performance with low energy consumption has become a primary design objective in multi-core systems. Recently, power management based on reinforcement learning has shown great potential for adapting to dynamic environments without much prior knowledge. However, the conventional Q-learning (QL) algorithms adopted in most existing works suffer from serious problems of scalability, instability, and overestimation. In this paper, we present a deep reinforcement learning-based approach that improves the stability and robustness of power management while reducing the energy-delay product (EDP) under user-specified performance requirements. The comprehensive status of the system is monitored periodically, making our controller responsive to environmental changes. To further improve learning effectiveness, our approach implements knowledge sharing among multiple devices. Experimental results on multiple realistic applications show that the proposed method reduces instability by up to 68% compared with QL. Through knowledge sharing among multiple devices, our federated approach achieves around 4.8% EDP improvement over QL on average.
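To make the abstract's ingredients concrete, below is a minimal sketch (not the authors' implementation) of how a DQN-style power manager with experience replay, a target network, and FedAvg-style knowledge sharing could be wired together. All names (PowerQNet, select_vf_level, federated_average) and all parameters (state features, number of voltage/frequency levels, network sizes, hyperparameters) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a DQN-based power manager: a small Q-network maps
# the periodically sampled system status to a discrete voltage/frequency
# (V/F) level, trained with experience replay and a target network.
# All names, dimensions, and hyperparameters are assumed.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6      # assumed status features: utilization, IPC, power, ...
N_VF_LEVELS = 8    # assumed number of discrete V/F operating points

class PowerQNet(nn.Module):
    """Small MLP mapping system status to Q-values over V/F levels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_VF_LEVELS),
        )

    def forward(self, s):
        return self.net(s)

policy = PowerQNet()
target = PowerQNet()                      # target network stabilizes updates
target.load_state_dict(policy.state_dict())
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)             # experience replay buffer
GAMMA, EPSILON, BATCH = 0.95, 0.1, 64

def select_vf_level(state):
    """Epsilon-greedy choice of the next V/F level for one control period."""
    if random.random() < EPSILON:
        return random.randrange(N_VF_LEVELS)
    with torch.no_grad():
        q = policy(torch.tensor(state, dtype=torch.float32))
        return int(q.argmax())

def train_step():
    """One gradient step on a random minibatch; the reward would encode
    negative EDP penalized by performance-requirement violations."""
    if len(replay) < BATCH:
        return
    s, a, r, s2 = zip(*random.sample(replay, BATCH))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(s2).max(dim=1).values   # target net curbs overestimation
    loss = nn.functional.mse_loss(q, r + GAMMA * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def federated_average(models):
    """FedAvg-style knowledge sharing: average the Q-network parameters of
    several devices and push the result back to each of them."""
    avg = {k: sum(m.state_dict()[k] for m in models) / len(models)
           for k in models[0].state_dict()}
    for m in models:
        m.load_state_dict(avg)
```

In such a setup, each control period would append a (state, action, reward, next_state) tuple to replay, call train_step(), and periodically copy policy's weights into target; federated_average would run whenever participating devices exchange models. The paper's actual reward shaping, state encoding, and sharing protocol may differ.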
Pages: 1371-1376
Page count: 6