Outperforming LRU with an adaptive replacement cache algorithm

Cited by: 128
Authors
Megiddo, N
Modha, DS
Institution
[1] IBM Almaden Research Center, San Jose, CA
DOI
10.1109/MC.2004.1297303
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject Classification
0812
Abstract
Caching, a fundamental metaphor in modern computing, finds wide application in storage systems [1], databases, Web servers, middleware, processors, file systems, disk drives, redundant array of independent disks (RAID) controllers, operating systems, and other applications such as data compression and list updating [2]. In a two-level memory hierarchy, a cache performs faster than auxiliary storage, but it is more expensive. Cost concerns thus usually limit cache size to a fraction of the auxiliary memory's size. Both cache and auxiliary memory handle uniformly sized items called pages. Requests for pages go first to the cache. When a page is found in the cache, a hit occurs; otherwise, a cache miss happens, and the request goes to the auxiliary memory. In the latter case, a copy is paged into the cache. This practice, called demand paging, rules out prefetching pages from the auxiliary memory into the cache. If the cache is full, before the system can page in a new page, it must page out one of the currently cached pages. A replacement policy determines which page is evicted. A commonly used criterion for evaluating a replacement policy is its hit ratio, the frequency with which it finds a page in the cache. Of course, the replacement policy's implementation overhead should not exceed the anticipated time savings.

Discarding the least recently used (LRU) page is the policy of choice in cache management. Until recently, attempts to outperform LRU in practice had not succeeded because of overhead issues and the need to pretune parameters. The adaptive replacement cache (ARC) is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve on LRU. ARC's online adaptation will likely benefit real-life workloads because of their richness and variability over time. Such workloads can contain long sequential I/Os or moving hot spots, exhibit changing frequency and scale of temporal locality, and fluctuate between stable, repeating access patterns and patterns with transient clustered references.

Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size. A real-life implementation revealed that ARC has a low space overhead: 0.75 percent of the cache size. Also, unlike LRU, ARC is scan-resistant in that it allows one-time sequential requests to pass through without polluting the cache or flushing pages that have temporal locality. Likewise, ARC effectively handles long periods of low temporal locality. ARC yields substantial performance gains, in terms of an improved hit ratio compared with LRU, for a wide range of cache sizes.
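The demand-paging loop described in the abstract is easy to make concrete. The sketch below is illustrative code, not from the article: a plain LRU cache in Python in which a hit refreshes a page's recency, a miss pages in on demand, and a full cache first pages out its least recently used entry. The `load_page` callback is a hypothetical stand-in for auxiliary memory.

from collections import OrderedDict

class LRUCache:
    """Demand-paging cache that evicts the least recently used page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> page, LRU at the left end

    def request(self, key, load_page):
        """Return the page for `key`; `load_page` models auxiliary memory."""
        if key in self.pages:               # hit: refresh recency
            self.pages.move_to_end(key)
            return self.pages[key]
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # page out the LRU page
        page = load_page(key)               # page in on demand (no prefetch)
        self.pages[key] = page
        return page

Note that a single one-time scan over many distinct keys evicts every resident page here; that scan-pollution weakness is exactly what ARC's scan resistance addresses.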
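The abstract summarizes ARC's behavior but not its mechanics. The following is a minimal Python sketch of the ARC policy as published by Megiddo and Modha (the names t1, t2, b1, b2, and p follow their formulation; the `request`/`load_page` interface is an assumption for illustration). T1 holds pages seen once recently, T2 pages seen at least twice, B1 and B2 are "ghost" lists remembering recently evicted keys, and the target size p of T1 adapts online: a B1 ghost hit means recency would have helped, so p grows; a B2 ghost hit means frequency would have helped, so p shrinks.

from collections import OrderedDict

class ARCCache:
    """Sketch of the ARC adaptive replacement cache (simplified)."""

    def __init__(self, c):
        self.c = c               # cache size in pages
        self.p = 0               # adaptive target size for T1
        self.t1 = OrderedDict()  # key -> page, seen once; MRU at right end
        self.t2 = OrderedDict()  # key -> page, seen at least twice
        self.b1 = OrderedDict()  # ghost keys recently evicted from T1
        self.b2 = OrderedDict()  # ghost keys recently evicted from T2

    def _replace(self, key):
        # Evict from T1 or T2 into the matching ghost list,
        # steering the split toward the adaptive target p.
        if self.t1 and (len(self.t1) > self.p or
                        (key in self.b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)  # LRU page of T1
            self.b1[old] = None
        else:
            old, _ = self.t2.popitem(last=False)  # LRU page of T2
            self.b2[old] = None

    def request(self, key, load_page):
        """Return the page for `key`; `load_page` models auxiliary memory."""
        # Case I: hit in T1 or T2 -- promote to MRU of T2.
        if key in self.t1:
            page = self.t1.pop(key)
            self.t2[key] = page
            return page
        if key in self.t2:
            self.t2.move_to_end(key)
            return self.t2[key]

        # Case II: ghost hit in B1 -- recency is winning, grow p.
        if key in self.b1:
            self.p = min(self.c,
                         self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(key)
            del self.b1[key]
            page = load_page(key)
            self.t2[key] = page
            return page

        # Case III: ghost hit in B2 -- frequency is winning, shrink p.
        if key in self.b2:
            self.p = max(0,
                         self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(key)
            del self.b2[key]
            page = load_page(key)
            self.t2[key] = page
            return page

        # Case IV: complete miss -- make room, then insert into T1.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)   # drop oldest B1 ghost
                self._replace(key)
            else:
                self.t1.popitem(last=False)   # B1 empty: discard LRU of T1
        elif len(self.t1) + len(self.b1) < self.c:
            total = (len(self.t1) + len(self.t2) +
                     len(self.b1) + len(self.b2))
            if total >= self.c:
                if total == 2 * self.c:
                    self.b2.popitem(last=False)  # drop oldest B2 ghost
                self._replace(key)
        page = load_page(key)
        self.t1[key] = page
        return page

A long one-time scan enters through T1 and leaves T2 untouched, which is the scan resistance the abstract highlights; and because each request touches only a constant number of list operations, the per-request running time is essentially independent of cache size, as claimed.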
Pages: 58+
Page count: 9
Related papers (50 in total)
  • [1] SF-LRU cache replacement algorithm
    Alghazo, J
    Akaaboune, A
    Botros, N
    [J]. RECORDS OF THE 2004 IEEE INTERNATIONAL WORKSHOP ON MEMORY TECHNOLOGY, DESIGN AND TESTING, 2004, : 19 - 24
  • [2] On the Analysis of Cache Invalidation With LRU Replacement
    Zheng, Quan
    Yang, Tao
    Kan, Yuanzhi
    Tan, Xiaobin
    Yang, Jian
    Jiang, Xiaofeng
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (03) : 654 - 666
  • [3] Reference Table based Cache design using LRU replacement algorithm for Last Level Cache
    Kumaar, Keishi T.
    Sharma, Anamika
    Bhaskar, M.
    [J]. PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), 2016, : 2219 - 2223
  • [4] LRU-MRU With Physical Address Cache Replacement Algorithm On FPGA Application
    Xue, Yuan
    Lei, Yongmei
    [J]. 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE), 2014, : 1302 - 1307
  • [5] LRU based small latency first replacement (SLFR) algorithm for the proxy cache
    Shin, SW
    Kim, KY
    Jang, JS
    [J]. IEEE/WIC INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE, PROCEEDINGS, 2003, : 499 - 502
  • [6] TTL approximations of the cache replacement algorithms LRU(m) and h-LRU
    Gast, Nicolas
    Van Houdt, Benny
    [J]. PERFORMANCE EVALUATION, 2017, 117 : 33 - 57
  • [7] S-LRU: A Cache Replacement Algorithm of Video Sharing System for Mobile Devices
    Guo, Jia
    Liu, Chuanchang
    Chen, Junliang
    Sun, Huifeng
    [J]. PROCEEDINGS OF 2012 2ND INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND NETWORK TECHNOLOGY (ICCSNT 2012), 2012, : 1180 - 1184
  • [8] An effective LRU with random replacement policy for cache memory
    Khanfar, K
    [J]. PDPTA'03: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED PROCESSING TECHNIQUES AND APPLICATIONS, VOLS 1-4, 2003, : 1837 - 1843
  • [9] Mobile Database Cache Replacement Policies: LRU and PPRRP
    Chavan, Hariram
    Sane, Suneeta
    [J]. ADVANCES IN COMPUTER SCIENCE AND INFORMATION TECHNOLOGY, PT I, 2011, 131 : 523 - +
  • [10] LRU-based algorithms for Web cache replacement
    Vakali, AI
    [J]. ELECTRONIC COMMERCE AND WEB TECHNOLOGIES, PROCEEDINGS, 2000, 1875 : 409 - 418