Reducing DRAM Cache Access in Cache Miss via an Effective Predictor

Cited by: 0
Authors
Wang, Qi [1 ,2 ]
Xing, Yanzhen [1 ,2 ]
Wang, Donghui [1 ]
Affiliations
[1] Chinese Acad Sci, Key Lab Informat Technol Autonomous Underwater Ve, Inst Acoust, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
Keywords
DRAM cache; predictor; cache miss
DOI
Not available
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
As more and more cores are integrated on a single chip, memory speed has become a major performance bottleneck. The widening latency gap between high-speed cores and main memory has led to the evolution of multi-level caches and the use of DRAM as the Last-Level Cache (LLC). The main problem of employing a DRAM cache is its high tag-lookup latency: if the DRAM cache misses, the memory-access latency is higher than in a system without a DRAM cache. To solve this problem, we propose an effective predictor to Reduce DRAM Cache Access (RCA) on a cache miss. The predictor consists of a saturating counter and a Partial MissMap (P_Map). If the saturating counter indicates a hit, the request is sent to the P_Map to further determine whether it is a hit or not. The evaluation results show that RCA can improve system performance by 8.2% and 3.4% on average, compared to MissMap and MAP_G, respectively.
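The abstract only sketches the two-stage lookup flow, so the following C++ sketch illustrates one possible realization under stated assumptions: a 2-bit saturating counter, a P_Map modeled as a hash set of resident block addresses, and hypothetical names (SaturatingCounter, PartialMissMap, route_request). It is an illustrative model of the decision path, not the paper's actual hardware design; misprediction recovery and the P_Map's limited coverage are omitted.

// Illustrative sketch (assumed design, not the paper's exact implementation):
// a 2-bit saturating counter gives a fast hit/miss guess, and a Partial
// MissMap (modeled here as a hash set of resident block addresses) confirms
// predicted hits before the DRAM cache is accessed.
#include <cstdint>
#include <unordered_set>

class SaturatingCounter {
    unsigned value_ = 2;                     // 2-bit counter; >= 2 means "predict hit"
public:
    bool predicts_hit() const { return value_ >= 2; }
    void train(bool was_hit) {               // update once the real outcome is known
        if (was_hit) { if (value_ < 3) ++value_; }
        else         { if (value_ > 0) --value_; }
    }
};

class PartialMissMap {
    std::unordered_set<uint64_t> resident_;  // block addresses known to be in the DRAM cache
public:
    void insert(uint64_t blk) { resident_.insert(blk); }
    void evict(uint64_t blk)  { resident_.erase(blk); }
    bool contains(uint64_t blk) const { return resident_.count(blk) != 0; }
};

enum class Route { DramCache, MainMemory };

// Stage 1: the saturating counter. A predicted miss bypasses the DRAM cache
// entirely and goes straight to main memory, avoiding the tag-lookup latency.
// Stage 2: only a predicted hit is checked against the P_Map, which makes
// the final routing decision for the request.
Route route_request(const SaturatingCounter& ctr,
                    const PartialMissMap& pmap,
                    uint64_t block_addr) {
    if (!ctr.predicts_hit())
        return Route::MainMemory;
    return pmap.contains(block_addr) ? Route::DramCache : Route::MainMemory;
}

In a fuller model, the DRAM-cache controller would call PartialMissMap::insert on a fill and evict on an eviction, and train the counter with the verified outcome of each access, so both structures track the cache's current contents; how the real design handles counter mispredictions is not described in the abstract and is left out here.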
Pages: 501 - 504
Page count: 4
Related Papers
50 records in total
  • [21] Compiler techniques for reducing data cache miss rate on a multithreaded architecture
    Sarkar, Subhradyuti
    Tullsen, Dean M.
    HIGH PERFORMANCE EMBEDDED ARCHITECTURES AND COMPILERS, 2008, 4917 : 353 - 368
  • [22] Impact of reducing miss write latencies in multiprocessors with two level cache
    Sahuquillo, J
    Pont, A
    24TH EUROMICRO CONFERENCE - PROCEEDINGS, VOLS 1 AND 2, 1998: 333 - 336
  • [23] Reducing cache miss penalty using I-FETCH instructions
    Okamoto, S
    Kazuyoshi, T
    HIGH PERFORMANCE COMPUTING SYSTEMS AND APPLICATIONS, 2002, 657 : 177 - 185
  • [24] Building a Low Latency, Highly Associative DRAM Cache with the Buffered Way Predictor
    Wang, Zhe
    Jimenez, Daniel A.
    Zhang, Tao
    Loh, Gabriel H.
    Xie, Yuan
    PROCEEDINGS OF THE 28TH IEEE INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD 2016), 2016: 109 - 117
  • [25] DCA: a DRAM-cache-aware DRAM controller
    Huang, Cheng-Chieh
    Nagarajan, Vijay
    Joshi, Arpit
    SC '16: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2016: 887 - 897
  • [26] Miss-rate reduction in texture cache by adaptive cache indexing
    Kim, CH
    Im, YH
    Kim, LS
    ELECTRONICS LETTERS, 2004, 40 (10) : 597 - 598
  • [27] A cache replacement policy to reduce cache miss rate for multiprocessor architecture
    Lim, Ho
    Kim, Jaehwan
    Chong, Jong-wha
    IEICE ELECTRONICS EXPRESS, 2010, 7 (12): 850 - 855
  • [28] Unified DRAM and NVM Hybrid Buffer Cache Architecture for Reducing Journaling Overhead
    Zhang, Zhiyong
    Ju, Lei
    Jia, Zhiping
    PROCEEDINGS OF THE 2016 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2016: 942 - 947
  • [29] Tuning sizes of arrays: an effective method to reduce Cache miss ratio
    Shanghai Jiaotong Daxue Xuebao, 8 (44-48)
  • [30] ReDRAM: A Reconfigurable DRAM Cache for GPGPUs
    Sahoo, Debiprasanna
    Sha, Swaraj
    Satpathy, Manoranjan
    Mutyam, Madhu
    IEEE COMPUTER ARCHITECTURE LETTERS, 2018, 17 (02) : 213 - 216