Hierarchical Dynamic Power Management Using Model-Free Reinforcement Learning

Cited: 0
Authors
Wang, Yanzhi [1 ]
Triki, Maryam
Lin, Xue [1 ]
Ammari, Ahmed C.
Pedram, Massoud [1 ]
Affiliations
[1] Univ So Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Keywords
Dynamic power management; reinforcement learning; Bayesian classification
DOI: not available
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties emanating from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components). The goal is to facilitate savings in the system components' power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on a semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run on the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby enabling the LPM to achieve even greater component power optimization. In this hierarchical DPM framework, the power and latency tradeoff of each type of application can be precisely controlled via a user-defined parameter. Experiments show average power savings of up to 31.1% compared to existing approaches.
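To make the component-level learning concrete, below is a minimal Python sketch of temporal difference learning on an SMDP (the continuous-time Q-learning update of Bradtke and Duff), as an LPM of this kind might apply it. This is not the paper's implementation: the power modes, workload states, cost model, and the tradeoff knob user_lambda are illustrative assumptions. The defining SMDP feature is that both the accumulated reward and the bootstrap target are discounted by the sojourn time tau between decision epochs.

import math
import random

# Minimal sketch of SMDP Q-learning for a component-level local power
# manager (LPM). NOT the paper's implementation: states, power modes,
# and the cost model are illustrative assumptions.

ACTIONS = ["sleep", "standby", "active"]  # hypothetical component power modes

class SMDPQLearner:
    def __init__(self, alpha=0.1, beta=0.5, epsilon=0.1):
        self.q = {}              # Q-values keyed by (state, action)
        self.alpha = alpha       # learning rate
        self.beta = beta         # continuous-time discount rate
        self.epsilon = epsilon   # exploration probability

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state):
        # Epsilon-greedy selection over power modes.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value(state, a))

    def update(self, state, action, reward_rate, tau, next_state):
        # Discounted reward accumulated over a sojourn of length tau,
        # assuming a constant reward rate during the interval.
        lump = (1.0 - math.exp(-self.beta * tau)) / self.beta * reward_rate
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        target = lump + math.exp(-self.beta * tau) * best_next
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (target - old)

# One decision epoch: observe a workload state, pick a power mode, wait
# tau seconds, then learn from the measured power and request latency.
# user_lambda plays the role of the user-defined power/latency knob.
lpm = SMDPQLearner()
user_lambda = 0.3                              # hypothetical tradeoff parameter
state, next_state = "low_rate", "high_rate"    # illustrative workload states
action = lpm.choose(state)
power_w, latency_s, tau = 0.2, 4.0, 1.5        # made-up measurements
reward_rate = -(power_w + user_lambda * latency_s)
lpm.update(state, action, reward_rate, tau, next_state)

Discounting by the observed tau, rather than by a fixed per-step factor, is what lets the learner compare decisions whose consequences unfold over unequal real-time intervals; this matches the event-driven decision epochs of a power-managed I/O component.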
Pages: 170-177
Page count: 8
Related Papers
50 items in total
  • [41] Model-Free Reinforcement Learning with Continuous Action in Practice
    Degris, Thomas
    Pilarski, Patrick M.
    Sutton, Richard S.
    2012 AMERICAN CONTROL CONFERENCE (ACC), 2012, : 2177 - 2182
  • [42] Dynamic Tuning of PI-Controllers based on Model-free Reinforcement Learning Methods
    Brujeni, Lena Abbasi
    Lee, Jong Min
    Shah, Sirish L.
    INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2010), 2010, : 453 - 458
  • [43] Robotic Table Tennis with Model-Free Reinforcement Learning
    Gao, Wenbo
    Graesser, Laura
    Choromanski, Krzysztof
    Song, Xingyou
    Lazic, Nevena
    Sanketi, Pannag
    Sindhwani, Vikas
    Jaitly, Navdeep
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 5556 - 5563
  • [44] MODEL-FREE ONLINE REINFORCEMENT LEARNING OF A ROBOTIC MANIPULATOR
    Sweafford, Jerry, Jr.
    Fahimi, Farbod
    MECHATRONIC SYSTEMS AND CONTROL, 2019, 47 (03): 136 - 143
  • [45] Interactive learning for multi-finger dexterous hand: A model-free hierarchical deep reinforcement learning approach
    Li, Baojiang
    Qiu, Shengjie
    Bai, Jibo
    Wang, Bin
    Zhang, Zhekai
    Li, Liang
    Wang, Haiyan
    Wang, Xichao
    KNOWLEDGE-BASED SYSTEMS, 2024, 295
  • [46] Model-Free Reinforcement-Learning-Based Control Methodology for Power Electronic Converters
    Alfred, Dajr
    Czarkowski, Dariusz
    Teng, Jiaxin
    2021 13TH ANNUAL IEEE GREEN TECHNOLOGIES CONFERENCE GREENTECH 2021, 2021, : 81 - 88
  • [47] Model-Free Approach to Fair Solar PV Curtailment Using Reinforcement Learning
    Wei, Zhuo
    de Nijs, Frits
    Li, Jinhao
    Wang, Hao
    PROCEEDINGS OF THE 2023 THE 14TH ACM INTERNATIONAL CONFERENCE ON FUTURE ENERGY SYSTEMS, E-ENERGY 2023, 2023, : 14 - 21
  • [48] Model-free MIMO control tuning of a chiller process using reinforcement learning
    Rosdahl, Christian
    Bernhardsson, Bo
    Eisenhower, Bryan
    SCIENCE AND TECHNOLOGY FOR THE BUILT ENVIRONMENT, 2023, 29 (08) : 782 - 794
  • [49] Model-free safe reinforcement learning for chemical processes using Gaussian processes
    Savage, Thomas
    Zhang, Dongda
    Mowbray, Max
    Chanona, Ehecatl Antonio Del Rio
    IFAC PAPERSONLINE, 2021, 54 (03): 504 - 509
  • [50] Real-time dynamic pricing in a non-stationary environment using model-free reinforcement learning
    Rana, Rupal
    Oliveira, Fernando S.
    OMEGA-INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE, 2014, 47 : 116 - 126