Size-Aware Cache Management for Compressed Cache Architectures

Cited by: 8
Authors
Baek, Seungcheol [1 ]
Lee, Hyung Gyu [2 ]
Nicopoulos, Chrysostomos [3 ]
Lee, Junghee [4 ]
Kim, Jongman [1 ]
Affiliations
[1] Georgia Inst Technol, Dept Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Daegu Univ, Sch Comp & Commun Engn, Gyongsan, South Korea
[3] Univ Cyprus, Dept Elect & Comp Engn, Nicosia, Cyprus
[4] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX USA
Keywords
Cache; compression; data compression; cache compression; cache replacement policy
DOI
10.1109/TC.2014.2360518
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression. Last-Level Caches (LLC) are particularly amenable to such compression schemes, since the primary purpose of the LLC is to minimize the miss rate, i.e., it directly benefits from a larger logical capacity. In compressed LLCs, the cacheline size varies depending on the achieved compression ratio. Our observations indicate that this size information provides useful hints when managing the cache (e.g., when selecting a victim), which can lead to increased cache performance. However, there are currently no replacement policies tailored to compressed LLCs; existing techniques focus primarily on locality information. This article introduces the concept of size-aware cache management as a way to maximize the performance of compressed caches. Upon analyzing the benefits of considering size information in the management of compressed caches, we propose a novel mechanism, called the Effective Capacity Maximizer (ECM), to further improve the performance and energy efficiency of compressed LLCs. The proposed technique revolves around four fundamental principles: ECM Insertion (ECM-I), ECM Promotion (ECM-P), ECM Eviction Scheduling (ECM-ES), and ECM Replacement (ECM-R). Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing conventional locality-aware cache replacement policies. Specifically, our ECM shows an average effective capacity increase of 18.4 percent over the Least-Recently Used (LRU) policy, and 23.9 percent over the Dynamic Re-Reference Interval Prediction (DRRIP) [1] scheme. This translates into average system performance improvements of 7.2 percent over LRU and 4.2 percent over DRRIP. Moreover, the average energy consumption is also reduced by 5.9 percent over LRU and 3.8 percent over DRRIP.
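To make the size-aware idea concrete, the following is a minimal sketch, not the paper's actual ECM algorithm: a single set of a compressed cache in which insertion priority, hit promotion, and victim selection all take the compressed line size into account, in the spirit of ECM-I, ECM-P, and ECM-R. The threshold, the RRIP-style priority values, and the tie-breaking rule are all illustrative assumptions.

```python
# Hypothetical sketch of size-aware management for one compressed-cache set.
# All constants and policy details below are assumptions for illustration,
# not the algorithm published in the paper.

BIG_LINE_THRESHOLD = 32   # bytes; assumed cut-off for a "large" compressed line
MAX_RRPV = 3              # 2-bit re-reference prediction values, RRIP-style

class SizeAwareSet:
    def __init__(self, capacity_bytes=64):
        self.capacity = capacity_bytes
        self.lines = []   # each line: {"tag": ..., "size": ..., "rrpv": ...}

    def used(self):
        # Effective occupancy is the sum of compressed sizes, not a line count.
        return sum(line["size"] for line in self.lines)

    def insert(self, tag, size):
        # ECM-I-style insertion: large lines start with a distant re-reference
        # prediction, so they become eviction candidates sooner.
        while self.used() + size > self.capacity:
            self.evict()
        rrpv = MAX_RRPV if size > BIG_LINE_THRESHOLD else MAX_RRPV - 1
        self.lines.append({"tag": tag, "size": size, "rrpv": rrpv})

    def touch(self, tag):
        # ECM-P-style promotion: a hit on a small line promotes it fully,
        # while a big line is promoted only one step.
        for line in self.lines:
            if line["tag"] == tag:
                if line["size"] <= BIG_LINE_THRESHOLD:
                    line["rrpv"] = 0
                else:
                    line["rrpv"] = max(0, line["rrpv"] - 1)
                return True
        return False

    def evict(self):
        # ECM-R-style replacement: among the lines with the most distant
        # re-reference prediction, evict the largest one, freeing the most
        # effective capacity per eviction.
        if not self.lines:
            return None
        top = max(line["rrpv"] for line in self.lines)
        victim = max((l for l in self.lines if l["rrpv"] == top),
                     key=lambda l: l["size"])
        self.lines.remove(victim)
        return victim["tag"]
```

For example, inserting a 16-byte line and a 48-byte line into a 64-byte set and then inserting a third line forces an eviction; the size-aware policy removes the large line first, even though it is the more recently inserted one, because a single eviction then frees enough effective capacity.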
Pages: 2337-2352 (16 pages)
Related Papers
50 items in total
  • [41] LCRC: A Dependency-aware Cache Management Policy for Spark
    Wang, Bo
    Tang, Jie
    Zhang, Rui
    Ding, Wei
    Qi, Deyu
    [J]. 2018 IEEE INT CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, UBIQUITOUS COMPUTING & COMMUNICATIONS, BIG DATA & CLOUD COMPUTING, SOCIAL COMPUTING & NETWORKING, SUSTAINABLE COMPUTING & COMMUNICATIONS, 2018, : 956 - 963
  • [42] Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing
    Gupta, Saurabh
    Zhou, Huiyang
    [J]. 2015 44TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING (ICPP), 2015, : 150 - 159
  • [43] Replication-aware Cache Management for CMPs with Private LLCs
    Yuan, Fengkai
    Ji, Zhenzhou
    [J]. 2016 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS (ICCC), 2016, : 2829 - 2834
  • [44] Cache Reuse Aware Replacement Policy for Improving GPU Cache Performance
    Son, Dong Oh
    Kim, Gwang Bok
    Kim, Jong Myon
    Kim, Cheol Hong
    [J]. IT CONVERGENCE AND SECURITY 2017, VOL 2, 2018, 450 : 127 - 133
  • [45] Thermal- and Cache-Aware Resource Management based on ML-Driven Cache Contention Prediction
    Sikal, Mohammed Bakr
    Khdr, Heba
    Rapp, Martin
    Henkel, Joerg
    [J]. PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 1383 - 1388
  • [46] ACAM: Application Aware Adaptive Cache Management for Shared LLC
    Mahto, Sujit Kr
    Newton
    [J]. VLSI DESIGN AND TEST, 2017, 711 : 324 - 336
  • [47] Process Variation-Aware Adaptive Cache Architecture and Management
    Mutyam, Madhu
    Wang, Feng
    Krishnan, Ramakrishnan
    Narayanan, Vijaykrishnan
    Kandemir, Mahmut
    Xie, Yuan
    Irwin, Mary Jane
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2009, 58 (07) : 865 - 877
  • [48] Morphable cache architectures: Potential benefits
    Kadayif, I
    Kandemir, M
    Vijaykrishnan, N
    Irwin, MJ
    Ramanujam, J
    [J]. ACM SIGPLAN NOTICES, 2001, 36 (08) : 128 - 137
  • [49] A Cache Tuning Heuristic for Multicore Architectures
    Rawlins, Marisha
    Gordon-Ross, Ann
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2013, 62 (08) : 1570 - 1583
  • [50] Near-Optimal Cache Block Placement with Reactive Nonuniform Cache Architectures
    Hardavellas, Nikos
    Ferdman, Michael
    Falsafi, Babak
    Ailamaki, Anastasia
    [J]. IEEE MICRO, 2010, 30 (01) : 20 - 28