Compressed cache layout aware prefetching

Cited by: 1
|
Authors
Charmchi, Niloofar [1 ]
Collange, Caroline [1 ]
Seznec, Andre [1 ]
Affiliations
[1] Univ Rennes, INRIA, CNRS, IRISA, Rennes, France
Keywords
cache compression; compaction; hardware prefetching;
DOI
10.1109/SBAC-PAD.2019.00017
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology];
Discipline code
0812;
Abstract
The widening speed gap between CPU and memory impairs performance. Cache compression and hardware prefetching are two techniques that confront this bottleneck by reducing last-level cache misses. The two interact positively: prefetching benefits from higher cache capacity, and compression increases the effective cache size. This paper proposes Compressed cache Layout Aware Prefetching (CLAP), which leverages recently proposed sector-based compressed cache layouts such as SCC or YACC to create a synergy between the compressed cache and prefetching. The idea is, on a miss, to prefetch the contiguous blocks that can be compressed and co-allocated together with the requested block. Prefetched blocks that share storage with existing blocks do not need to evict a valid existing entry; CLAP therefore avoids cache pollution. To decide which co-allocatable blocks to prefetch, we propose a compression predictor. In our experimental evaluation, CLAP reduces the number of cache misses by 12% and improves performance by 4% on average, compared with a compressed cache alone.
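The prefetch decision described in the abstract can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the paper's actual design: it assumes a YACC/SCC-style layout with 4-block superblocks of contiguous 64-byte blocks, and a hypothetical compression predictor built from 2-bit saturating counters indexed by superblock address. On a miss, if the predictor expects the superblock to compress, the other blocks of the same superblock are proposed for prefetch, since they could co-allocate with the missed block without evicting a valid entry.

```python
class CompressionPredictor:
    """Hypothetical compression predictor: a table of 2-bit saturating
    counters indexed by superblock address. The paper's exact predictor
    organization may differ."""

    def __init__(self, entries=1024):
        self.entries = entries
        self.table = [2] * entries  # start weakly "compressible"

    def _index(self, superblock_addr):
        return superblock_addr % self.entries

    def predict_compressible(self, superblock_addr):
        return self.table[self._index(superblock_addr)] >= 2

    def update(self, superblock_addr, was_compressible):
        """Train on the observed outcome once the block's actual
        compressed size is known."""
        i = self._index(superblock_addr)
        if was_compressible:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)


BLOCK_SIZE = 64          # bytes per cache block (assumed)
SUPERBLOCK_BLOCKS = 4    # contiguous blocks per superblock, as in SCC/YACC


def clap_prefetch_candidates(miss_addr, predictor):
    """On a miss to miss_addr, return the addresses of the other blocks
    in the same superblock, but only when the predictor expects them to
    compress and co-allocate with the missed block."""
    superblock = miss_addr // (BLOCK_SIZE * SUPERBLOCK_BLOCKS)
    if not predictor.predict_compressible(superblock):
        return []  # unlikely to co-allocate: do not pollute the cache
    base = superblock * BLOCK_SIZE * SUPERBLOCK_BLOCKS
    return [base + i * BLOCK_SIZE
            for i in range(SUPERBLOCK_BLOCKS)
            if base + i * BLOCK_SIZE != miss_addr]
```

For example, a miss to block 0x1040 would propose prefetching 0x1000, 0x1080, and 0x10C0 while the predictor is confident; after repeated incompressible outcomes for that superblock, the counter saturates low and no prefetch is issued, which is how cache pollution is avoided.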
Pages: 25-28
Page count: 4
Related papers
50 records
  • [1] Page Size Aware Cache Prefetching
    Vavouliotis, Georgios
    Chacon, Gino
    Alvarez, Lluc
    Gratz, Paul V.
    Jimenez, Daniel A.
    Casas, Marc
    2022 55TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 2022, : 956 - 974
  • [2] VM-aware Adaptive Storage Cache Prefetching
    Matsuzawa, Keiichi
    Shinagawa, Takahiro
    2017 9TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING TECHNOLOGY AND SCIENCE (CLOUDCOM), 2017, : 65 - 73
  • [3] Instruction prefetching of systems codes with layout optimized for reduced cache misses
    Xia, C
    Torrellas, J
    23RD ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, PROCEEDINGS, 1996, : 271 - 282
  • [4] Cache optimizations for iterative numerical codes aware of hardware prefetching
    Weidendorfer, Josef
    Trinitis, Carsten
    APPLIED PARALLEL COMPUTING: STATE OF THE ART IN SCIENTIFIC COMPUTING, 2006, 3732 : 921 - 927
  • [5] Size-Aware Cache Management for Compressed Cache Architectures
    Baek, Seungcheol
    Lee, Hyung Gyu
    Nicopoulos, Chrysostomos
    Lee, Junghee
    Kim, Jongman
    IEEE TRANSACTIONS ON COMPUTERS, 2015, 64 (08) : 2337 - 2352
  • [6] A unified compressed cache hierarchy using Simple Frequent Pattern Compression and partial cache line prefetching
    Tian, Xinhua
    Zhang, Minxuan
    EMBEDDED SOFTWARE AND SYSTEMS, PROCEEDINGS, 2007, 4523 : 142 - +
  • [7] Prefetching-aware cache line turnoff for saving leakage energy
    Kadayif, Ismail
    Kandemir, Mahmut
    Li, Feihui
    ASP-DAC 2006: 11TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, PROCEEDINGS, 2006, : 182 - 187
  • [8] Robust Cache-Aware Quantum Processor Layout
    LeCompte, Travis
    Qi, Fang
    Peng, Lu
    2020 INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS (SRDS 2020), 2020, : 276 - 287
  • [9] An OLS regression model for context-aware tile prefetching in a web map cache
    Garcia Martin, Ricardo
    de Castro Fernandez, Juan Pablo
    Verdu Perez, Elena
    Verdu Perez, Maria Jesus
    Regueras Santos, Luisa Maria
    INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE, 2013, 27 (03) : 614 - 632
  • [10] Cache Prefetching in Embedded DSPs
    Vaintraub, Adiel
    Kahn, Roger
    Weiss, Shlomo
    2018 IEEE INTERNATIONAL CONFERENCE ON THE SCIENCE OF ELECTRICAL ENGINEERING IN ISRAEL (ICSEE), 2018