Outperforming LRU with an adaptive replacement cache algorithm

Cited by: 128
Authors: Megiddo, N; Modha, DS
Affiliation: [1] IBM Almaden Research Center, San Jose, CA
DOI: 10.1109/MC.2004.1297303
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline code: 0812
Abstract
Caching, a fundamental metaphor in modern computing, finds wide application in storage systems,(1) databases, Web servers, middleware, processors, file systems, disk drives, redundant array of independent disks (RAID) controllers, operating systems, and other applications such as data compression and list updating.(2) In a two-level memory hierarchy, a cache performs faster than auxiliary storage, but it is more expensive. Cost concerns thus usually limit cache size to a fraction of the auxiliary memory's size.

Both cache and auxiliary memory handle uniformly sized items called pages. Requests for pages go first to the cache. When a page is found in the cache, a hit occurs; otherwise, a cache miss happens, and the request goes to the auxiliary memory. In the latter case, a copy is paged into the cache. This practice, called demand paging, rules out prefetching pages from the auxiliary memory into the cache. If the cache is full, before the system can page in a new page, it must page out one of the currently cached pages. A replacement policy determines which page is evicted. A commonly used criterion for evaluating a replacement policy is its hit ratio, the frequency with which it finds requested pages in the cache. Of course, the replacement policy's implementation overhead should not exceed the anticipated time savings.

Discarding the least-recently-used (LRU) page is the policy of choice in cache management. Until recently, attempts to outperform LRU in practice had not succeeded because of overhead issues and the need to pretune parameters. The adaptive replacement cache (ARC) is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve LRU.
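The demand-paging and LRU-eviction mechanics described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's code; the class and method names are invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal demand-paging cache with LRU replacement (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page id -> page data, LRU first
        self.hits = 0
        self.requests = 0

    def request(self, page_id):
        self.requests += 1
        if page_id in self.pages:
            self.hits += 1                   # cache hit
            self.pages.move_to_end(page_id)  # mark as most recently used
            return self.pages[page_id]
        # Cache miss: demand-page the item in, evicting the LRU page if full.
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # page out least recently used
        data = f"page-{page_id}"             # stand-in for an auxiliary-memory fetch
        self.pages[page_id] = data
        return data

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0
```

For example, with capacity 2 the request stream 1, 2, 1, 3 yields one hit (the second request for page 1) and evicts page 2, for a hit ratio of 0.25.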
ARC's online adaptation will likely benefit real-life workloads because of their richness and variability over time. These workloads can contain long sequential I/Os or moving hot spots, with the frequency and scale of temporal locality changing over time, and can fluctuate between stable, repeating access patterns and patterns with transient clustered references. Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size. A real-life implementation revealed that ARC has a low space overhead of 0.75 percent of the cache size. Also, unlike LRU, ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache or flushing pages that have temporal locality. Likewise, ARC also effectively handles long periods of low temporal locality. ARC leads to substantial performance gains in terms of an improved hit ratio compared with LRU for a wide range of cache sizes.
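The self-tuning balance between recency and frequency can be sketched with the general structure the abstract describes: one list for pages seen once recently, one for pages seen repeatedly, "ghost" directories of recently evicted page ids, and an adaptive target `p` that shifts capacity between the two lists. This is a simplified sketch of that idea, not the authors' published pseudocode; all names and the exact adaptation increments here are assumptions for illustration.

```python
from collections import OrderedDict

class SimplifiedARC:
    """Simplified ARC-style cache: two LRU lists plus ghost directories
    and a self-tuning target p (not the paper's exact replacement rules)."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0               # adaptive target size for t1 (recency side)
        self.t1 = OrderedDict()  # cached pages seen once recently
        self.t2 = OrderedDict()  # cached pages seen at least twice
        self.b1 = OrderedDict()  # ghost ids of pages evicted from t1
        self.b2 = OrderedDict()  # ghost ids of pages evicted from t2

    def _replace(self, hit_in_b2):
        if len(self.t1) + len(self.t2) < self.c:
            return               # cache not full yet: nothing to evict
        if self.t1 and (len(self.t1) > self.p or
                        (hit_in_b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None  # remember the eviction as a ghost
        elif self.t2:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = None

    def request(self, page):
        """Return True on a cache hit, False on a miss."""
        if page in self.t1:      # hit: promote to the frequency list
            del self.t1[page]
            self.t2[page] = None
            return True
        if page in self.t2:      # hit: refresh position within t2
            self.t2.move_to_end(page)
            return True
        if page in self.b1:      # ghost hit: recency is winning, grow p
            self.p = min(self.c, self.p +
                         max(1, len(self.b2) // max(1, len(self.b1))))
            self._replace(False)
            del self.b1[page]
            self.t2[page] = None
            return False
        if page in self.b2:      # ghost hit: frequency is winning, shrink p
            self.p = max(0, self.p -
                         max(1, len(self.b1) // max(1, len(self.b2))))
            self._replace(True)
            del self.b2[page]
            self.t2[page] = None
            return False
        # Complete miss: demand-page into t1, keeping directories bounded.
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        elif total >= self.c:
            if total >= 2 * self.c and self.b2:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[page] = None
        return False
```

One-time sequential scans only ever touch `t1` and its ghosts, so they pass through without displacing the re-referenced pages held in `t2` — a rough picture of the scan resistance the abstract describes.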
Pages: 58+
Page count: 9
Related papers (50 in total)
  • [41] An improved instruction cache replacement algorithm
    Kleen, A
    Stienberg, E
    Anschel, M
    Sibony, Y
    Greenberg, S
    [J]. 2005 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS - DESIGN AND IMPLEMENTATION (SIPS), 2005, : 573 - 578
  • [42] Window-LRFU: a cache replacement policy subsumes the LRU and window-LFU policies
    Bai, Sen
    Bai, Xin
    Che, Xiangjiu
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2016, 28 (09): : 2670 - 2684
  • [43] A Cache Model of the Block Correlations Directed Cache Replacement Algorithm
    Zhu Xudong
    [J]. APPLIED MATHEMATICS & INFORMATION SCIENCES, 2011, 5 (02): : 79 - 88
  • [44] CRFP: A Novel Adaptive Replacement Policy Combined the LRU and LFU Policies
    Li Zhan-sheng
    Liu Da-wei
    Bi Hui-juan
    [J]. 8TH IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY WORKSHOPS: CIT WORKSHOPS 2008, PROCEEDINGS, 2008, : 72 - +
  • [45] A Modified PSO Algorithm Based On Cache Replacement Algorithm
    Feng, Mingyue
    Pan, Hua
    [J]. 2014 TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY (CIS), 2014, : 558 - 562
  • [46] LRU-SP: A size-adjusted and popularity-aware LRU replacement algorithm for web caching
    Cheng, K
    Kambayashi, Y
    [J]. 24TH ANNUAL INTERNATIONAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC 2000), 2000, 24 : 48 - 53
  • [47] Adaptive replacement policy for hybrid cache architecture
    Choi, Ju-Hee
    Park, Gi-Ho
    [J]. IEICE ELECTRONICS EXPRESS, 2014, 11 (22):
  • [48] CCF-LRU: A New Buffer Replacement Algorithm for Flash Memory
    Li, Zhi
    Jin, Peiquan
    Su, Xuan
    Cui, Kai
    Yue, Lihua
    [J]. IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2009, 55 (03) : 1351 - 1359
  • [49] SARC: Sequential prefetching in adaptive replacement cache
    Gill, BS
    Modha, DS
    [J]. USENIX ASSOCIATION PROCEEDINGS OF THE GENERAL TRACK: 2005 USENIX ANNUAL TECHNICAL CONFERENCE, 2005, : 293 - 308
  • [50] FS-LRU: A Page Cache Algorithm for Eliminating fsync Write on Mobile Devices
    Kang, Dong Hyun
    Eom, Young Ik
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2016,