Hierarchical Dynamic Power Management Using Model-Free Reinforcement Learning

Cited by: 0
Authors
Wang, Yanzhi [1 ]
Triki, Maryam
Lin, Xue [1 ]
Ammari, Ahmed C.
Pedram, Massoud [1 ]
Affiliations
[1] Univ So Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Keywords
Dynamic power management; reinforcement learning; Bayesian classification
DOI
Not available
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification
0812
Abstract
Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties emanating from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (collectively called system components). The goal is to reduce the power consumption of the system components, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on a semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby enabling the LPM to achieve further component-level power savings. In this hierarchical DPM framework, the power-latency tradeoff of each application type can be precisely controlled through a user-defined parameter. Experiments show average power savings of up to 31.1% compared to existing approaches.
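To make the learning mechanism in the abstract concrete, the following is a minimal sketch of the kind of SMDP-based temporal difference update an RL-driven local power manager could perform. The state encoding, the power-mode action set, the cost terms, and the tradeoff parameter lam are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import random
from collections import defaultdict

ACTIONS = ["active", "standby", "sleep"]   # hypothetical component power modes
ALPHA = 0.1      # learning rate
GAMMA = 0.95     # per-time-unit discount factor
EPSILON = 0.1    # exploration probability

Q = defaultdict(float)   # Q[(state, action)] -> estimated value, defaults to 0.0

def choose_action(state):
    """Epsilon-greedy selection over power modes."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def smdp_td_update(state, action, power_cost, latency_cost,
                   sojourn, next_state, lam):
    """One temporal-difference update on the SMDP.

    The reward trades power against latency via the user-defined
    parameter lam (lam = 0: minimize power only; lam = 1: latency only).
    Because decision epochs have variable duration, the discount factor
    is raised to the sojourn time, as SMDP Q-learning requires.
    (Simplification: the cost accrued over the epoch is treated as a
    lump sum rather than being discounted continuously.)
    """
    reward = -((1.0 - lam) * power_cost + lam * latency_cost)
    discount = GAMMA ** sojourn
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_error = reward + discount * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error
```

In use, the LPM would call smdp_td_update once per decision epoch with the observed power and latency costs and the epoch's duration; setting lam near 0 favors power savings, while lam near 1 favors responsiveness, mirroring the user-controlled tradeoff described above.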
Pages: 170-177
Page count: 8
Related Papers
50 records (first 10 shown)
  • [1] Learning Representations in Model-Free Hierarchical Reinforcement Learning
    Rafati, Jacob
    Noelle, David C.
Thirty-Third AAAI Conference on Artificial Intelligence / Thirty-First Innovative Applications of Artificial Intelligence Conference / Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 2019: 10009-10010
  • [2] Model-Free Reinforcement Learning and Bayesian Classification in System-Level Power Management
    Wang, Yanzhi
    Pedram, Massoud
IEEE Transactions on Computers, 2016, 65 (12): 3713-3726
  • [3] Improve the Stability and Robustness of Power Management through Model-free Deep Reinforcement Learning
    Chen, Lin
    Li, Xiao
    Xu, Jiang
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE 2022), 2022: 1371-1376
  • [4] Model-free aperiodic tracking for discrete-time systems using hierarchical reinforcement learning
    Tian, Yingqiang
    Wan, Haiying
    Karimi, Hamid Reza
    Luan, Xiaoli
    Liu, Fei
Neurocomputing, 2024, 609
  • [5] Deriving a Near-optimal Power Management Policy Using Model-Free Reinforcement Learning and Bayesian Classification
    Wang, Yanzhi
    Xie, Qing
    Ammari, Ahmed
    Pedram, Massoud
Proceedings of the 48th ACM/EDAC/IEEE Design Automation Conference (DAC), 2011: 41-46
  • [6] Model-Free Control for Dynamic-Field Acoustic Manipulation Using Reinforcement Learning
    Latifi, Kourosh
    Kopitca, Artur
    Zhou, Quan
IEEE Access, 2020, 8: 20597-20606
  • [7] Model-free Resource Management of Cloud-based applications using Reinforcement Learning
    Jin, Yue
    Bouzid, Makram
    Kostadinov, Dimitre
    Aghasaryan, Armen
2018 21st Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN), 2018
  • [8] Resource management of cloud-enabled systems using model-free reinforcement learning
    Jin, Yue
    Bouzid, Makram
    Kostadinov, Dimitre
    Aghasaryan, Armen
    Annals of Telecommunications, 2019, 74 (9-10): 625-636
  • [9] Model-free learning control of neutralization processes using reinforcement learning
    Syafiie, S.
    Tadeo, F.
    Martinez, E.
    Engineering Applications of Artificial Intelligence, 2007, 20 (06): 767-782