Reducing DRAM Cache Access in Cache Miss via an Effective Predictor

Cited: 0
Authors
Wang, Qi [1 ,2 ]
Xing, Yanzhen [1 ,2 ]
Wang, Donghui [1 ]
Affiliations
[1] Chinese Acad Sci, Key Lab Informat Technol Autonomous Underwater Ve, Inst Acoust, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
Keywords
DRAM cache; predictor; cache miss
DOI
None available
CLC Number
TP31 [Computer Software]
Subject Classification
081202; 0835
Abstract
As more and more cores are integrated on a single chip, memory speed has become a major performance bottleneck. The widening latency gap between high-speed cores and main memory has driven the evolution of multi-level caches and the use of DRAM as the Last-Level Cache (LLC). The main drawback of a DRAM cache is its high tag-lookup latency: on a DRAM cache miss, memory-access latency is higher than in a system without a DRAM cache. To solve this problem, we propose an effective predictor to Reduce DRAM Cache Access (RCA) on a cache miss. The predictor consists of a saturating counter and a Partial MissMap (P_Map). If the saturating counter indicates a hit, the request is sent to the P_Map to further determine whether it is a hit. The evaluation results show that RCA improves system performance by 8.2% and 3.4% on average, compared to MissMap and MAP_G, respectively.
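The abstract describes a two-stage predictor: a saturating counter gives a coarse hit/miss guess, and only when it predicts a hit is the Partial MissMap consulted. The paper does not give implementation details, so the following is a minimal behavioral sketch under assumptions: the counter width, the `RCAPredictor`/`SaturatingCounter` class names, and modeling the P_Map as a partial set of resident line addresses are all illustrative, not the authors' design.

```python
class SaturatingCounter:
    """A small saturating counter tracking recent DRAM-cache hit/miss outcomes.
    Width is an assumption; 2 bits is a common choice for such predictors."""

    def __init__(self, bits=2):
        self.max = (1 << bits) - 1
        self.value = self.max  # start biased toward "hit"

    def update(self, hit):
        # Saturating increment on a hit, saturating decrement on a miss.
        if hit:
            self.value = min(self.value + 1, self.max)
        else:
            self.value = max(self.value - 1, 0)

    def predicts_hit(self):
        # Upper half of the counter range is interpreted as "hit".
        return self.value >= (self.max + 1) // 2


class RCAPredictor:
    """Sketch of the two-stage flow from the abstract: counter first,
    then the P_Map (here: a partial set of lines believed resident in
    the DRAM cache) only when the counter predicts a hit."""

    def __init__(self):
        self.counter = SaturatingCounter()
        self.p_map = set()

    def predict(self, line_addr):
        if not self.counter.predicts_hit():
            # Predicted miss: bypass the DRAM cache tag lookup entirely
            # and send the request straight to main memory.
            return False
        # Predicted hit: consult the P_Map for a finer-grained answer.
        return line_addr in self.p_map

    def update(self, line_addr, actual_hit):
        # Train both structures with the actual DRAM-cache outcome.
        self.counter.update(actual_hit)
        if actual_hit:
            self.p_map.add(line_addr)
        else:
            self.p_map.discard(line_addr)
```

The point of the two-stage structure is that a string of misses drives the counter low, so subsequent requests skip both the P_Map lookup and the costly DRAM tag probe.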
Pages: 501-504
Page count: 4
Related Papers
50 items total
  • [1] ATCache: Reducing DRAM Cache Latency via a Small SRAM Tag Cache
    Huang, Cheng-Chieh
    Nagarajan, Vijay
    [J]. PROCEEDINGS OF THE 23RD INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT'14), 2014, : 51 - 60
  • [2] Reducing cache miss ratio for routing prefix cache
    Liu, H
    [J]. GLOBECOM'02: IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE, VOLS 1-3, CONFERENCE RECORDS: THE WORLD CONVERGES, 2002, : 2323 - 2327
  • [3] Reducing Latency in an SRAM/DRAM Cache Hierarchy via a Novel Tag-Cache Architecture
    Hameed, Fazal
    Bauer, Lars
    Henkel, Joerg
    [J]. 2014 51ST ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2014,
  • [4] Small Cache Lookaside Table for Fast DRAM Cache Access
    Tao, Xi
    Zeng, Qi
    Peir, Jih-Kwon
    Lu, Shih-Lien
    [J]. 2016 IEEE 35TH INTERNATIONAL PERFORMANCE COMPUTING AND COMMUNICATIONS CONFERENCE (IPCCC), 2016,
  • [5] Unison Cache: A Scalable and Effective Die-Stacked DRAM Cache
    Jevdjic, Djordje
    Loh, Gabriel H.
    Kaynak, Cansu
    Falsafi, Babak
    [J]. 2014 47TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 2014, : 25 - 37
  • [6] Saber: Sequential Access Based cachE Replacement to Reduce the Cache Miss Penalty
    Zhao, Yingjie
    Xiao, Nong
    [J]. PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE FOR YOUNG COMPUTER SCIENTISTS, VOLS 1-5, 2008, : 1389 - 1394
  • [7] THE CACHE DRAM ARCHITECTURE - A DRAM WITH AN ON-CHIP CACHE MEMORY
    HIDAKA, H
    MATSUDA, Y
    ASAKURA, M
    FUJISHIMA, K
    [J]. IEEE MICRO, 1990, 10 (02) : 14 - 25
  • [8] Cache What You Need to Cache: Reducing Write Traffic in Cloud Cache via "One-Time-Access-Exclusion" Policy
    Wang, Hua
    Zhang, Jiawei
    Huang, Ping
    Yi, Xinbo
    Cheng, Bin
    Zhou, Ke
    [J]. ACM TRANSACTIONS ON STORAGE, 2020, 16 (03)
  • [9] Work-in-Progress: DRAM Cache Access Optimization leveraging Line Locking in Tag Cache
    Tripathy, Shivani
    Sahoo, Debiprasanna
    Satpathy, Manoranjan
    [J]. 2018 INTERNATIONAL CONFERENCE ON COMPILERS, ARCHITECTURES AND SYNTHESIS FOR EMBEDDED SYSTEMS (CASES), 2018,
  • [10] A memory bandwidth effective cache store miss policy
    Rui, H
    Zhang, FX
    Hu, WW
    [J]. ADVANCES IN COMPUTER SYSTEMS ARCHITECTURE, PROCEEDINGS, 2005, 3740 : 750 - 760