Compressed cache layout aware prefetching

Cited by: 1
Authors
Charmchi, Niloofar [1 ]
Collange, Caroline [1 ]
Seznec, Andre [1 ]
Affiliations
[1] Univ Rennes, INRIA, CNRS, IRISA, Rennes, France
Keywords
cache compression; compaction; hardware prefetching
DOI
10.1109/SBAC-PAD.2019.00017
Chinese Library Classification
TP3 [computing technology, computer technology]
Subject Classification Code
0812
Abstract
The speed gap between CPU and memory impairs performance. Cache compression and hardware prefetching are two techniques that can address this bottleneck by decreasing last-level cache misses. Moreover, compression and prefetching interact positively: prefetching benefits from higher cache capacity, and compression increases the effective cache size. This paper proposes Compressed cache Layout Aware Prefetching (CLAP), which leverages recently proposed sector-based compressed cache layouts, such as SCC and YACC, to create a synergy between the compressed cache and prefetching. The idea is to prefetch, on a miss, the contiguous blocks that can be compressed and co-allocated together with the requested block. Prefetched blocks that share storage with existing blocks do not need to evict a valid existing entry; therefore, CLAP avoids cache pollution. To decide which co-allocatable blocks to prefetch, we propose a compression predictor. Based on our experimental evaluations, CLAP reduces the number of cache misses by 12% and improves performance by 4% on average, compared to a compressed cache.
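The abstract above outlines the CLAP mechanism at a high level. The following is a minimal, self-contained sketch of that idea, not the authors' implementation: it assumes 64 B blocks grouped into 4-block sectors (as in SCC/YACC-style layouts), and the CompressionPredictor stub, issue_prefetch hook, and prefetch_queue are illustrative stand-ins for the real predictor table and hardware prefetch queue.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kBlocksPerSector = 4;   // assumed SCC/YACC-style sector of 4 contiguous blocks
constexpr int kBlockSizeBytes  = 64;  // assumed cache block size

// Hypothetical compression predictor: the paper proposes one, but its exact
// structure is not given here; this stub only illustrates the interface.
struct CompressionPredictor {
    bool predicts_compressible(uint64_t block_addr) const {
        return ((block_addr >> 6) & 1) == 0;  // placeholder heuristic, not the real predictor
    }
};

std::vector<uint64_t> prefetch_queue;  // stand-in for the hardware prefetch queue

void issue_prefetch(uint64_t block_addr) { prefetch_queue.push_back(block_addr); }

// On a last-level cache miss, prefetch only those neighboring blocks of the
// same sector that are predicted to compress, i.e. to co-allocate with the
// demanded block, so the prefetches need not evict valid entries.
void clap_on_miss(uint64_t miss_addr, const CompressionPredictor& pred) {
    uint64_t block_addr  = miss_addr & ~uint64_t(kBlockSizeBytes - 1);
    uint64_t sector_base = block_addr & ~uint64_t(kBlocksPerSector * kBlockSizeBytes - 1);

    if (!pred.predicts_compressible(block_addr))
        return;  // demanded block unlikely to compress: neighbors cannot share its storage

    for (int i = 0; i < kBlocksPerSector; ++i) {
        uint64_t neighbor = sector_base + uint64_t(i) * kBlockSizeBytes;
        if (neighbor != block_addr && pred.predicts_compressible(neighbor))
            issue_prefetch(neighbor);  // expected to co-allocate with the demanded block
    }
}

int main() {
    CompressionPredictor pred;
    clap_on_miss(0x1008, pred);  // miss in the block at 0x1000, sector base 0x1000
    for (uint64_t addr : prefetch_queue)
        std::printf("prefetch 0x%llx\n", static_cast<unsigned long long>(addr));
}
```

Under these assumptions, the prefetches target only blocks that can share the storage of the demanded block, which is how CLAP avoids the cache pollution that conventional prefetching can cause.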
Pages: 25-28
Page count: 4