Buffer Filter: A Last-level Cache Management Policy for CPU-GPGPU Heterogeneous System

Cited by: 5
Authors
Li, Songyuan [2 ]
Meng, Jinglei [2 ]
Yu, Licheng [2 ]
Ma, Jianliang [2 ]
Chen, Tianzhou [2 ]
Wu, Minghui [1 ]
Affiliations
[1] Zhejiang Univ City Coll, Dept Comp Sci & Engn, Hangzhou, Zhejiang, Peoples R China
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Zhejiang, Peoples R China
Keywords
shared last-level cache; multicore; heterogeneous system; high performance; replacement
DOI
10.1109/HPCC-CSS-ICESS.2015.290
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline code
081202; 0835;
Abstract
There is a growing trend toward heterogeneous systems that integrate CPUs and GPGPUs on a single chip. Managing the on-chip resources shared between CPUs and GPGPUs, however, is a major challenge, and the last-level cache (LLC) is one of the most critical of these resources because of its impact on system performance. Well-known cache replacement policies such as LRU and DRRIP, designed for CPUs, are poorly suited to heterogeneous systems: the LLC becomes dominated by memory accesses from the thousands of threads of GPGPU applications, which can cause significant performance degradation for the CPU. Moreover, a GPGPU can tolerate memory latency as long as it has enough active threads, but these policies do not exploit this property. In this paper we propose Buffer Filter, a novel shared LLC management policy for CPU-GPGPU heterogeneous systems that takes advantage of the memory latency tolerance of GPGPUs. The policy restricts GPGPU streaming requests by adding a buffer to the memory system, thereby vacating LLC space for cache-sensitive CPU applications. Although the GPGPU loses some IPC, its latency tolerance preserves the basic performance of GPGPU applications. Experiments show that Buffer Filter filters out 50% to 75% of all GPGPU streaming requests at the cost of a small GPGPU IPC decrease, and improves the hit rate of CPU applications by 2x to 7x.
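Based only on the abstract, the mechanism can be pictured roughly as follows: GPGPU requests judged to be streaming are diverted into a small buffer added to the memory system rather than being installed in the shared LLC, leaving LLC capacity to cache-sensitive CPU applications. The sketch below is an assumption-laden Python illustration of that idea; the class names (LruLLC, BufferFilter), the FIFO buffer size, and the one-touch streaming heuristic are all hypothetical and are not taken from the paper.

```python
# Illustrative model of the Buffer Filter idea from the abstract (not the
# authors' implementation): streaming GPGPU requests bypass the shared LLC
# and land in a small buffer, preserving LLC space for CPU applications.

from collections import OrderedDict, deque


class LruLLC:
    """Set-associative last-level cache with LRU replacement (illustrative)."""

    def __init__(self, num_sets=1024, ways=16):
        self.ways = ways
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, block):
        """Return True on hit; on miss, install the block, evicting the LRU way."""
        s = self.sets[block % len(self.sets)]
        if block in s:
            s.move_to_end(block)          # promote to MRU
            return True
        if len(s) >= self.ways:
            s.popitem(last=False)         # evict the LRU way
        s[block] = True
        return False


class BufferFilter:
    """Diverts GPGPU requests judged to be streaming into a small FIFO buffer,
    so they never occupy shared LLC space (hypothetical model of the policy)."""

    def __init__(self, llc, buffer_entries=64):
        self.llc = llc
        self.buffer = deque(maxlen=buffer_entries)  # the added memory-system buffer
        self.touches = {}                           # crude per-block reuse counter

    def _looks_streaming(self, block):
        # Placeholder heuristic: a block seen for the first time is assumed to be
        # streaming. The paper's actual detection logic is not in the abstract.
        self.touches[block] = self.touches.get(block, 0) + 1
        return self.touches[block] == 1

    def access(self, block, from_gpu):
        if from_gpu and self._looks_streaming(block):
            hit = block in self.buffer
            if not hit:
                self.buffer.append(block)           # filtered: bypasses the LLC
            return hit
        return self.llc.access(block)               # CPU path (and reused GPU blocks)


if __name__ == "__main__":
    llc = LruLLC(num_sets=64, ways=8)
    bf = BufferFilter(llc)
    cpu_hits = sum(bf.access(addr % 256, from_gpu=False) for addr in range(1024))
    gpu_hits = sum(bf.access(10_000 + addr, from_gpu=True) for addr in range(1024))
    print(f"CPU hits: {cpu_hits}, GPU hits: {gpu_hits}")
```

In this toy run the CPU working set fits in the LLC and hits on every pass after the first, while the GPGPU stream never pollutes the LLC, which is the qualitative effect the abstract attributes to Buffer Filter.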
Pages: 266-271
Page count: 6