Reinforcement Learning-Based Power Management Policy for Mobile Device Systems

Cited by: 6
Authors
Kwon, Eunji [1 ]
Han, Sodam [1 ]
Park, Yoonho [1 ]
Yoon, Jongho [1 ]
Kang, Seokhyeong [1 ]
Affiliations
[1] Pohang Univ Sci & Technol, Dept Elect & Elect Engn, Pohang 37673, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Mobile handsets; Power system management; Reinforcement learning; Performance evaluation; Quality of service; Central Processing Unit; Hardware; Q-learning; dynamic voltage; frequency scaling (DVFS); ARM bigLITTLE architecture; OS-level power management; quality of service (QoS); thread-level parallelism (TLP); ENERGY MANAGEMENT;
DOI
10.1109/TCSI.2021.3103503
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
This paper presents a power management policy that uses reinforcement learning to increase the power efficiency of mobile device systems based on a multiprocessor system-on-a-chip (MPSoC). The proposed policy predicts a system's characteristics and learns power management controls that adapt to variations in the system. We consider the behavioral characteristics of systems running on mobile devices under diverse scenarios; the policy can therefore manage system power flexibly regardless of the application scenario and achieve lower energy consumption without compromising user satisfaction. The average energy per unit quality of service (QoS) of the proposed policy is 31.66% lower than that of six previous dynamic voltage/frequency scaling (DVFS) governors. Furthermore, we reduce the runtime overhead by implementing the proposed policy in hardware. We implemented the policy on a field-programmable gate array (FPGA) and constructed a communication interface between the central processing units (CPUs) and the policy hardware. Decision-making by the hardware-implemented policy is 3.92 times faster than by the software-implemented policy.
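The abstract describes a Q-learning policy that selects DVFS operating points to minimize energy per unit QoS. A minimal sketch of that control loop is shown below; the state encoding, frequency levels, hyperparameters, and reward are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import random

# Candidate CPU frequency levels in GHz (assumed, for illustration only).
FREQ_LEVELS = [0.6, 1.0, 1.4, 1.8, 2.2]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table mapping (state, action) pairs to learned values.
# A state could be, e.g., a discretized CPU-utilization bucket.
q_table = {}

def choose_action(state):
    """Epsilon-greedy selection of a frequency-level index for the state."""
    if random.random() < EPSILON:
        return random.randrange(len(FREQ_LEVELS))
    qs = [q_table.get((state, a), 0.0) for a in range(len(FREQ_LEVELS))]
    return qs.index(max(qs))

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q += alpha * (TD target - Q).

    Here the reward would encode energy per unit QoS (higher reward for
    lower energy at acceptable QoS) -- an assumption, not the paper's metric.
    """
    best_next = max(q_table.get((next_state, a), 0.0)
                    for a in range(len(FREQ_LEVELS)))
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

The paper's hardware implementation accelerates exactly this kind of table lookup and update, which is why the FPGA version makes decisions faster than the software version.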
Pages: 4156-4169
Page count: 14
Related Papers
50 records total
  • [21] Reinforcement Learning-Based Control of a Power Electronic Converter
    Alfred, Dajr
    Czarkowski, Dariusz
    Teng, Jiaxin
    MATHEMATICS, 2024, 12 (05)
  • [22] Reinforcement Learning-Based Tracking Control For Wheeled Mobile Robot
    Nguyen Tan Luy
    PROCEEDINGS 2012 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2012, : 462 - 467
  • [23] Deep Reinforcement Learning-Based Method of Mobile Data Offloading
    Mochizuki, Daisuke
    Abiko, Yu
    Mineno, Hiroshi
    Saito, Takato
    Ikeda, Daizo
    Katagiri, Masaji
    2018 ELEVENTH INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND UBIQUITOUS NETWORK (ICMU 2018), 2018,
  • [24] RLC: A Reinforcement Learning-Based Charging Algorithm for Mobile Devices
    Liu, Tang
    Wu, Baijun
    Xu, Wenzheng
    Cao, Xianbo
    Peng, Jian
    Wu, Hongyi
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2021, 17 (04)
  • [25] Practical considerations in reinforcement learning-based MPC for mobile robots
    Busetto, Riccardo
    Breschi, Valentina
    Vaccari, Giulio
    Formentin, Simone
    IFAC PAPERSONLINE, 2023, 56 (02): : 5787 - 5792
  • [26] Applying Deep Learning-based concepts for the detection of device misconfigurations in power systems
    Fellner, David
    Strasser, Thomas I.
    Kastner, Wolfgang
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2022, 32
  • [27] Reinforcement Learning-Based Dynamic Power Management for Energy Harvesting Wireless Sensor Network
    Hsu, Roy Chaoming
    Liu, Cheng-Ting
    Lee, Wei-Ming
    NEXT-GENERATION APPLIED INTELLIGENCE, PROCEEDINGS, 2009, 5579 : 399 - 408
  • [28] A Reinforcement Learning-based Orchestrator for Edge Computing Resource Allocation in Mobile Augmented Reality Systems
    Qian, Weiyang
    Coutinho, Rodolfo W. L.
    2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023,
  • [29] Reinforcement Learning-Based Multi-Objective Optimization for Generation Scheduling in Power Systems
    Ebrie, Awol Seid
    Kim, Young Jin
    SYSTEMS, 2024, 12 (03):
  • [30] Reinforcement Learning-Based School Energy Management System
    Chemingui, Yassine
    Gastli, Adel
    Ellabban, Omar
    ENERGIES, 2020, 13 (23)