An Efficient GPU Cache Architecture for Applications with Irregular Memory Access Patterns

Cited by: 10
Authors
Li, Bingchao [1 ]
Wei, Jizeng [2 ]
Sun, Jizhou [2 ]
Annavaram, Murali [3 ]
Kim, Nam Sung [4 ,5 ]
Affiliations
[1] Civil Aviat Univ China, Sch Comp Sci & Technol, 2898 Jinbei Rd, Tianjin 300300, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, 135 Yaguan Rd,Haihe Educ Pk, Tianjin 300350, Peoples R China
[3] Univ Southern Calif, Dept Elect & Comp Engn, Hughes Aircraft Elect Engn Ctr, 3740 McClintock Ave, Los Angeles, CA 90089 USA
[4] Univ Illinois, Urbana, IL USA
[5] Coordinated Sci Lab, Dept Elect & Comp Engn, 1308 West Main St, Urbana, IL 61801 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
GPU; cache; shared memory; thread;
DOI
10.1145/3322127
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
GPUs provide high-bandwidth, low-latency on-chip shared memory and an L1 cache to efficiently service a large number of concurrent memory requests. Specifically, concurrent memory requests accessing contiguous memory space are coalesced into warp-wide accesses. To support such wide accesses to the L1 cache with low latency, the L1 cache line is no smaller than a warp-wide access. However, such an L1 cache architecture cannot always be utilized efficiently when applications generate many memory requests with irregular access patterns, especially due to branch and memory divergence, which leaves requests uncoalesced and small. Furthermore, unlike the L1 cache, the shared memory of GPUs is often left unused in many applications, since exploiting it depends entirely on the programmer. In this article, we propose Elastic-Cache, which efficiently supports both fine- and coarse-grained L1 cache line management for applications with regular and irregular memory access patterns, improving L1 cache efficiency. Specifically, it can store 32- or 64-byte words from non-contiguous memory space in a single 128-byte cache line. Furthermore, it neither requires an extra memory structure nor reduces the capacity of the L1 cache for tag storage, since it keeps the auxiliary tags for fine-grained L1 cache line management in the shared-memory space that many applications leave unused. To improve the L1 cache bandwidth utilization of Elastic-Cache for fine-grained accesses, we further propose Elastic-Plus, which issues 32-byte memory requests in parallel, reducing the processing latency of memory instructions and improving GPU throughput. Our experimental results show that Elastic-Cache improves the geometric-mean performance of applications with irregular memory access patterns by 104% without degrading the performance of applications with regular memory access patterns. Elastic-Plus outperforms Elastic-Cache and improves the performance of applications with irregular memory access patterns by 131%.
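The fine-grained mode described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the class and method names are hypothetical, and it models only the key idea that one 128-byte line can hold four independent 32-byte sectors, each validated by its own auxiliary tag (which the proposal stores in unused shared-memory space), so sectors from different 128-byte memory lines can coexist in one physical line.

```python
# Illustrative model of a fine-grained (sectored) L1 cache line.
# Assumption: each 32-byte sector keeps its position within the line
# (direct sector mapping) but may carry a different line tag.

LINE_SIZE = 128     # bytes per physical cache line
SECTOR_SIZE = 32    # bytes per fine-grained word
SECTORS = LINE_SIZE // SECTOR_SIZE  # 4 sectors per line


class ElasticLine:
    """One cache line managed at 32-byte granularity."""

    def __init__(self):
        # One auxiliary tag per sector; None means the sector is invalid.
        self.sector_tags = [None] * SECTORS

    def lookup(self, addr):
        """Return True on a sector hit for a 32-byte access at addr."""
        sector = (addr // SECTOR_SIZE) % SECTORS
        tag = addr // LINE_SIZE
        return self.sector_tags[sector] == tag

    def fill(self, addr):
        """Install the 32-byte sector containing addr."""
        sector = (addr // SECTOR_SIZE) % SECTORS
        self.sector_tags[sector] = addr // LINE_SIZE


line = ElasticLine()
line.fill(0x1000)   # sector 0, line tag 0x20
line.fill(0x30A0)   # sector 1, but from a *different* line (tag 0x61)
print(line.lookup(0x1000))  # True: sector 0 hit
print(line.lookup(0x1020))  # False: sector 1 holds data from 0x30A0
```

A conventional coarse-grained line would have evicted the first fill on the second one; here the two non-contiguous 32-byte words coexist, which is the utilization gain the abstract claims for divergent, uncoalesced accesses.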
Pages: 1-24 (24 pages)