Hierarchical Dynamic Power Management Using Model-Free Reinforcement Learning

Cited by: 0
Authors:
Wang, Yanzhi [1 ]
Triki, Maryam
Lin, Xue [1 ]
Ammari, Ahmed C.
Pedram, Massoud [1 ]
Affiliations:
[1] Univ So Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Keywords: Dynamic power management; reinforcement learning; Bayesian classification
DOI: Not available
Chinese Library Classification (CLC): TP3 [computing technology, computer technology]
Discipline code: 0812
Abstract:
Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties emanating from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (collectively called system components). The goal is to reduce the power consumption of the system components, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal-difference learning on a semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run on the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby enabling the LPM to achieve even greater component power savings. In this hierarchical DPM framework, the power-latency tradeoff of each type of application can be precisely controlled through a user-defined parameter. Experiments show average power savings of up to 31.1% compared to existing approaches.
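
The core mechanism named in the abstract, temporal-difference learning on an SMDP, differs from ordinary TD learning in that decision epochs have variable duration, so the discount factor depends on the sojourn time. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the device states, power commands, cost weights, and discount rate below are assumptions made purely for the example.

import math
import random
from collections import defaultdict

# Assumed device power states and power-management commands (illustrative).
STATES = ["busy", "idle_short", "idle_long"]
ACTIONS = ["stay_active", "sleep"]

ALPHA = 0.1             # learning rate
BETA = 0.01             # continuous-time discount rate (SMDP setting)
EPSILON = 0.1           # exploration probability
TRADEOFF = 0.5          # stand-in for the user-defined power/latency weight

Q = defaultdict(float)  # action-value table, keyed by (state, action)

def observed_cost(power, latency):
    # Combined cost; the tradeoff weight biases the learned policy
    # toward power saving or toward responsiveness.
    return TRADEOFF * power + (1.0 - TRADEOFF) * latency

def choose_action(state):
    # Epsilon-greedy selection; costs are minimized, so when
    # exploiting, pick the action with the smallest Q-value.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])

def td_update(state, action, cost, sojourn_time, next_state):
    # SMDP TD(0) update: because the time spent in (state, action)
    # varies, the discount factor is exp(-beta * tau) rather than a
    # fixed per-step gamma.
    discount = math.exp(-BETA * sojourn_time)
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (cost + discount * best_next
                                   - Q[(state, action)])

# Example step: after an 8 ms idle period, the chosen command incurred
# some power cost and 2 ms of wake-up latency (illustrative numbers).
a = choose_action("idle_short")
td_update("idle_short", a, observed_cost(power=0.2, latency=2.0),
          sojourn_time=8.0, next_state="busy")

In the paper's full framework, one such learner would run per system component, with the GPM sitting above the LPMs and steering the CPU scheduler; the tradeoff weight here stands in for the user-defined parameter the abstract mentions.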
Pages: 170-177 (8 pages)