Power Management for Chiplet-Based Multicore Systems Using Deep Reinforcement Learning

Cited by: 1
Authors
Li, Xiao [1 ]
Chen, Lin [1 ]
Chen, Shixi [1 ]
Jiang, Fan [1 ]
Li, Chengeng [1 ]
Xu, Jiang [2 ,3 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol Guangzhou, Guangzhou, Peoples R China
[3] HKUST Fok Ying Tung Res Inst, Hong Kong, Peoples R China
Keywords
power delivery system; deep reinforcement learning; energy efficiency; dynamic power management; chiplet
DOI
10.1109/ISVLSI54635.2022.00041
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline classification code
0812
Abstract
Trading off performance against energy to achieve high energy efficiency has become a critical design issue for computing systems. In emerging chiplet-based multicore systems, the explosive growth of system complexity exacerbates the challenge of improving the energy efficiency of both processors and power delivery systems (PDSs). Previous works that co-manage processors and PDSs using reinforcement learning can adapt to dynamic workload variations, but they suffer from poor scalability and PDS efficiency degradation. To tackle these problems, we propose a deep Q-network (DQN)-based online control scheme for the co-management of power delivery and power consumption in chiplet-based multicore systems. When evaluated on realistic applications, our approach with a centralized DQN agent achieves on average 4.6% and 33.2% greater energy-delay-product (EDP) reduction than the state-of-the-art modular Q-learning (MQL) approach and a heuristic-based approach, respectively.
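As a rough illustration of the control scheme described in the abstract, the sketch below shows a minimal centralized DQN agent in PyTorch. The state features (e.g., per-chiplet utilization, power, and voltage-regulator load), the discrete joint action encoding (e.g., per-chiplet V/F levels plus PDS configuration), and the EDP-based reward are hypothetical choices made for illustration; the paper's actual state, action, and reward formulation is not given in this record.

# Minimal sketch of a centralized DQN power-management agent (illustrative only).
# State/action/reward definitions below are assumptions, not the paper's exact design.
import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a global system state vector to Q-values over joint DVFS/PDS actions."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim, num_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, num_actions)
        self.target_q = QNetwork(state_dim, num_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)        # experience replay buffer
        self.gamma, self.eps, self.num_actions = gamma, eps, num_actions

    def act(self, state):
        # Epsilon-greedy choice over the discrete action set
        # (e.g., one joint index encoding per-chiplet V/F levels and a PDS setting).
        if random.random() < self.eps:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            return int(self.q(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s_next, done):
        # r could be a negative EDP proxy measured over the last control interval.
        self.replay.append((s, a, r, s_next, done))

    def train_step(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
        )
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(dim=1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network for stability.
        self.target_q.load_state_dict(self.q.state_dict())

In use, one such agent would observe the whole package each control epoch, pick a joint action, apply the corresponding V/F and PDS settings, and store the observed transition before calling train_step(); this centralized formulation is what distinguishes the approach from per-core modular Q-learning.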
Pages: 164-169
Page count: 6