Energy-Efficient Cache Partitioning Using Machine Learning for Embedded Systems

Cited by: 0
Authors
Nour, Samar [1 ]
Habashy, Shahira M. [1 ]
Salem, Sameh A. [1 ,2 ]
Affiliations
[1] Helwan Univ, Dept Comp & Syst Engn, Fac Engn, Cairo, Egypt
[2] Natl Telecom Regulatory Author, Egyptian Comp Emergency Readiness Team, Cairo, Egypt
Keywords
Energy optimization; Multicore embedded systems; Last level cache; Cache partitioning; Machine learning; AWARE
DOI
10.5455/jjee.204-1669909560
CLC Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Embedded device applications increasingly run concurrently and share platform resources. This cross-execution and resource sharing can cause memory access conflicts, especially in the Last Level Cache (LLC). The LLC is a promising target for improving performance on multicore embedded systems, as it reduces the number of high-latency main memory accesses. Commercial devices already support cache partitioning, which software can exploit to better utilize the LLC and conserve energy. This paper proposes a new energy-optimization model for embedded multicore systems based on a reconfigurable LLC architecture controlled by an artificial neural network. The proposed model uses a machine-learning approach to express the reconfiguration of the LLC and can predict each task's LLC partitioning factor for the next interval at runtime. The experimental results reveal that the proposed model - compared to other algorithms - reduces energy consumption by 28% and the LLC miss rate by 33%.
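The abstract describes a runtime loop in which a learned model maps each task's recent cache behavior to a per-task LLC partitioning factor for the next interval. The following is a minimal sketch of that control loop under stated assumptions: the feature set (miss rate and access rate counters), the linear stand-in for the trained neural network (`predict_demand`), the `rebalance` policy, and the 16-way LLC are all illustrative choices, not the authors' actual design.

```python
# Hypothetical sketch of interval-based LLC partitioning driven by a learned
# predictor. `predict_demand` stands in for the paper's trained ANN; the
# weights and the rebalancing policy are assumptions for illustration.

TOTAL_WAYS = 16  # assumed associativity of the shared LLC


def predict_demand(miss_rate, access_rate, weights=(10.0, 4.0, 1.0)):
    """Map runtime counters sampled over the last interval to a raw
    cache-way demand for the next interval (linear stand-in model)."""
    w_miss, w_acc, bias = weights
    return w_miss * miss_rate + w_acc * access_rate + bias


def rebalance(demands, total_ways=TOTAL_WAYS):
    """Scale per-task demands so the allocations sum to the available
    ways, guaranteeing every task at least one way."""
    total = sum(demands)
    shares = [max(1, round(d / total * total_ways)) for d in demands]
    # fix rounding drift so the allocation exactly fills the LLC
    while sum(shares) > total_ways:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_ways:
        shares[shares.index(min(shares))] += 1
    return shares


if __name__ == "__main__":
    # per-core counters from the previous interval: (miss rate, access rate)
    counters = [(0.30, 0.9), (0.05, 0.4), (0.20, 0.7), (0.02, 0.1)]
    demands = [predict_demand(m, a) for m, a in counters]
    print(rebalance(demands))  # one way count per core, summing to TOTAL_WAYS
```

In a real deployment the predicted way counts would be written to hardware partitioning registers (e.g. way masks) at each interval boundary; here the point is only the shape of the predict-then-rebalance cycle.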
Pages: 285-300 (16 pages)
Related Papers (showing 10 of 50)
  • [1] A Machine Learning Approach for a Scalable, Energy-Efficient Utility-Based Cache Partitioning
    Guney, Isa Ahmet
    Yildiz, Abdullah
    Bayindir, Ismail Ugur
    Serdaroglu, Kemal Cagri
    Bayik, Utku
    Kucuk, Gurhan
    [J]. HIGH PERFORMANCE COMPUTING, ISC HIGH PERFORMANCE 2015, 2015, 9137 : 409 - 421
  • [2] Cache Partitioning for Energy-Efficient and Interference-Free Embedded Multitasking
    Reddy, Rakesh
    Petrov, Peter
    [J]. ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2010, 9 (03)
  • [3] Dynamically Adaptive I-Cache Partitioning for Energy-Efficient Embedded Multitasking
    Paul, Mathew
    Petrov, Peter
    [J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2011, 19 (11) : 2067 - 2080
  • [4] Energy-Efficient Cache Partitioning For Future CMPs
    Sundararajan, Karthik T.
    Jones, Timothy M.
    Topham, Nigel P.
    [J]. PROCEEDINGS OF THE 21ST INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT'12), 2012, : 465 - 466
  • [5] Energy-Efficient Reconfigurable Cache Architectures for Accelerator-Enabled Embedded Systems
    Farmahini-Farahani, Amin
    Kim, Nam Sung
    Morrow, Katherine
    [J]. 2014 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS), 2014, : 211 - 220
  • [6] Energy-Efficient Trace Reuse Cache for Embedded Processors
    Tsai, Yi-Ying
    Chen, Chung-Ho
    [J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2011, 19 (09) : 1681 - 1694
  • [7] Cooperative Partitioning: Energy-Efficient Cache Partitioning for High-Performance CMPs
    Sundararajan, Karthik T.
    Porpodas, Vasileios
    Jones, Timothy M.
    Topham, Nigel P.
    Franke, Bjoern
    [J]. 2012 IEEE 18TH INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE COMPUTER ARCHITECTURE (HPCA), 2012, : 311 - 322
  • [8] Energy-Efficient Resource Allocation with Dynamic Cache Using Reinforcement Learning
    Hu, Zeyu
    Li, Zexu
    Li, Yong
    [J]. 2019 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2019,
  • [9] An energy-efficient partitioned instruction cache architecture for embedded processors
    Kim, CH
    Chung, SW
    Jhon, CS
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2006, E89D (04): : 1450 - 1458
  • [10] Energy-Efficient Machine Learning on the Edges
    Kumar, Mohit
    Zhang, Xingzhou
    Liu, Liangkai
    Wang, Yifan
    Shi, Weisong
    [J]. 2020 IEEE 34TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW 2020), 2020, : 912 - 921