Filter cache: filtering useless cache blocks for a small but efficient shared last-level cache

Cited by: 0

Authors
Han Jun Bae
Lynn Choi
Affiliation
[1] Korea University, School of Electrical Engineering
Source
The Journal of Supercomputing, 2020, 76(10): 7521-7544
Keywords
Shared last-level cache; Reuse rate; Temporal reuse; Spatial reuse; Multicore CPU; Cache organization
DOI
Not available
Abstract
Although the shared last-level cache (SLLC) occupies a significant portion of a multicore CPU's die area, more than 59% of SLLC cache blocks are never reused during their lifetime. If we can filter these useless blocks out of the SLLC, we can substantially reduce its size without sacrificing performance. To this end, we classify cache-block reuse into temporal and spatial reuse and further characterize it by reuse interval and reuse count. Our experiments show that most spatially reused cache blocks are reused only once, with a short reuse interval, so it is inefficient to keep them in the SLLC. In this paper, we propose adding a small cache, called the Filter Cache, to the SLLC; it not only detects temporal reuse but also prevents spatially reused blocks from entering the SLLC. As a result, the SLLC does not hold data for non-reused or spatially reused blocks, which dramatically reduces its required size. Through detailed simulation of the PARSEC benchmarks, we show that the new SLLC design with the Filter Cache delivers performance comparable to a conventional SLLC while using only 24.21% of its area, across a variety of workloads. This is achieved by the faster access and higher reuse rates of the small SLLC combined with the Filter Cache.
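To make the filtering idea in the abstract concrete, the following is a minimal sketch under simplified assumptions: a tag-only filter remembers recently missed block addresses together with the word offset of the first touch, and a block is allocated in the (deliberately small) SLLC only after it shows temporal reuse (a second access to the same word), while spatially reused blocks are served without ever occupying SLLC space. The names (FilterCache, TinySLLC, observe), the word-offset test, and all sizes are illustrative assumptions and are not taken from the paper's actual design.

# Minimal, hypothetical sketch of a filter-then-allocate SLLC; not the paper's design.
from collections import OrderedDict

BLOCK_BITS = 6                     # assume 64-byte cache blocks
BLOCK_SIZE = 1 << BLOCK_BITS


class FilterCache:
    """Tag-only filter in front of the SLLC.

    Remembers recently seen block addresses and the word offset of the first
    touch. A later access to the same word counts as temporal reuse and
    promotes the block into the SLLC; an access to a different word of the
    same block counts as spatial reuse and is served without allocating
    SLLC space.
    """

    def __init__(self, entries=1024):
        self.entries = entries
        self.table = OrderedDict()            # block address -> first-touch offset

    def observe(self, addr):
        """Classify this access as 'temporal', 'spatial', or 'miss'."""
        block, offset = addr >> BLOCK_BITS, addr & (BLOCK_SIZE - 1)
        if block in self.table:
            kind = 'temporal' if self.table[block] == offset else 'spatial'
            self.table.move_to_end(block)     # LRU update
            return kind
        if len(self.table) >= self.entries:   # evict the LRU filter entry
            self.table.popitem(last=False)
        self.table[block] = offset
        return 'miss'


class TinySLLC:
    """A small fully associative LRU last-level cache holding only promoted blocks."""

    def __init__(self, blocks=4096):
        self.blocks = blocks
        self.lru = OrderedDict()              # block address -> present

    def access(self, block):
        if block in self.lru:
            self.lru.move_to_end(block)
            return True
        return False

    def fill(self, block):
        if len(self.lru) >= self.blocks:
            self.lru.popitem(last=False)
        self.lru[block] = True


def run(trace, filt, sllc):
    """Feed byte addresses (misses from the private caches) through the filter
    and the small SLLC; return how many accesses the SLLC itself serves."""
    hits = 0
    for addr in trace:
        block = addr >> BLOCK_BITS
        if sllc.access(block):
            hits += 1
        elif filt.observe(addr) == 'temporal':
            sllc.fill(block)                  # allocate only proven-temporal blocks
    return hits


if __name__ == "__main__":
    # Toy trace: a hot region re-read several times (temporal reuse) plus a
    # long streaming scan that touches each block exactly once (never cached).
    hot = [b * BLOCK_SIZE for b in range(64)] * 5
    stream = list(range(1 << 20, (1 << 20) + (1 << 16), BLOCK_SIZE))
    print("SLLC hits:", run(hot + stream + hot, FilterCache(), TinySLLC()))

In this toy run the streaming blocks never enter the SLLC, so the hot region is still resident on the second pass; this is the intuition behind keeping the SLLC small while filtering out blocks without temporal reuse.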
Pages: 7521-7544
Page count: 23