Size-Aware Cache Management for Compressed Cache Architectures

Cited by: 8
Authors
Baek, Seungcheol [1 ]
Lee, Hyung Gyu [2 ]
Nicopoulos, Chrysostomos [3 ]
Lee, Junghee [4 ]
Kim, Jongman [1 ]
Affiliations
[1] Georgia Inst Technol, Dept Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Daegu Univ, Sch Comp & Commun Engn, Gyongsan, South Korea
[3] Univ Cyprus, Dept Elect & Comp Engn, Nicosia, Cyprus
[4] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX USA
Keywords
Cache; compression; data compression; cache compression; cache replacement policy
DOI
10.1109/TC.2014.2360518
CLC Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression. Last-Level Caches (LLC) are particularly amenable to such compression schemes, since the primary purpose of the LLC is to minimize the miss rate, i.e., it directly benefits from a larger logical capacity. In compressed LLCs, the cacheline size varies depending on the achieved compression ratio. Our observations indicate that this size information gives useful hints when managing the cache (e.g., when selecting a victim), which can lead to increased cache performance. However, there are currently no replacement policies tailored to compressed LLCs; existing techniques focus primarily on locality information. This article introduces the concept of size-aware cache management as a way to maximize the performance of compressed caches. Upon analyzing the benefits of considering size information in the management of compressed caches, we propose a novel mechanism, called Effective Capacity Maximizer (ECM), to further improve the performance and energy efficiency of compressed LLCs. The proposed technique revolves around four fundamental principles: ECM Insertion (ECM-I), ECM Promotion (ECM-P), ECM Eviction Scheduling (ECM-ES), and ECM Replacement (ECM-R). Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing conventional locality-aware cache replacement policies. Specifically, our ECM shows an average effective capacity increase of 18.4 percent over the Least-Recently Used (LRU) policy, and 23.9 percent over the Dynamic Re-Reference Interval Prediction (DRRIP) [1] scheme. This translates into average system performance improvements of 7.2 percent over LRU and 4.2 percent over DRRIP. Moreover, the average energy consumption is also reduced by 5.9 percent over LRU and 3.8 percent over DRRIP.
Pages: 2337-2352 (16 pages)
Related Papers (50 total)
  • [1] Adaptive Size-Aware Cache Insertion Policy for Content Delivery Networks
    Wang, Peng
    Liu, Yu
    Zhao, Zhelong
    Zhou, Ke
    Huang, Zhihai
    Chen, Yanxiong
    [J]. 2022 IEEE 40TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2022), 2022, : 195 - 202
  • [2] Lightweight Robust Size Aware Cache Management
    Einziger, Gil
    Eytan, Ohad
    Friedman, Roy
    Manes, Benjamin
    [J]. ACM TRANSACTIONS ON STORAGE, 2022, 18 (03)
  • [3] Compressed cache layout aware prefetching
    Charmchi, Niloofar
    Collange, Caroline
    Seznec, Andre
    [J]. 2019 31ST INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD 2019), 2019, : 25 - 28
  • [4] Yield-Aware Cache Architectures
    Ozdemir, Serkan
    Sinha, Debjit
    Memik, Gokhan
    Adams, Jonathan
    Zhou, Hai
    [J]. MICRO-39: PROCEEDINGS OF THE 39TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, 2006, : 15 - +
  • [5] Cache Line Aware Algorithm Design for Cache-Coherent Architectures
    Ramos, Sabela
    Hoefler, Torsten
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2016, 27 (10) : 2824 - 2837
  • [6] Power-aware partitioned cache architectures
    Kim, S
    Vijaykrishnan, N
    Kandemir, M
    Sivasubramaniam, A
    Irwin, MJ
    Geethanjali, E
    [J]. ISLPED'01: PROCEEDINGS OF THE 2001 INTERNATIONAL SYMPOSIUM ON LOWPOWER ELECTRONICS AND DESIGN, 2001, : 64 - 67
  • [7] Cache-Aware SPM Allocation Algorithms for Hybrid SPM-Cache Architectures
    Wu, Lan
    Zhang, Wei
    [J]. PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2015), 2015, : 123 - 129
  • [8] Page Size Aware Cache Prefetching
    Vavouliotis, Georgios
    Chacon, Gino
    Alvarez, Lluc
    Gratz, Paul V.
    Jimenez, Daniel A.
    Casas, Marc
    [J]. 2022 55TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 2022, : 956 - 974
  • [9] Cache management for discrete processor architectures
    Tu, JF
    [J]. PARALLEL AND DISTRIBUTED PROCESSING AND APPLICATIONS, 2005, 3758 : 205 - 215
  • [10] Process variation aware cache leakage management
    Meng, Ke
    Joseph, Russ
    [J]. ISLPED '06: PROCEEDINGS OF THE 2006 INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, 2006, : 262 - 267