An Efficient GPU Cache Architecture for Applications with Irregular Memory Access Patterns

Cited: 10
Authors
Li, Bingchao [1 ]
Wei, Jizeng [2 ]
Sun, Jizhou [2 ]
Annavaram, Murali [3 ]
Kim, Nam Sung [4 ,5 ]
Affiliations
[1] Civil Aviat Univ China, Sch Comp Sci & Technol, 2898 Jinbei Rd, Tianjin 300300, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, 135 Yaguan Rd,Haihe Educ Pk, Tianjin 300350, Peoples R China
[3] Univ Southern Calif, Dept Elect & Comp Engn, Hughes Aircraft Elect Engn Ctr, 3740 McClintock Ave, Los Angeles, CA 90089 USA
[4] Univ Illinois, Urbana, IL USA
[5] Coordinated Sci Lab, Dept Elect & Comp Engn, 1308 West Main St, Urbana, IL 61801 USA
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
GPU; cache; shared memory; thread;
DOI
10.1145/3322127
Chinese Library Classification: TP3 [Computing Technology]
Discipline Code: 0812
Abstract
GPUs provide high-bandwidth/low-latency on-chip shared memory and L1 cache to efficiently service a large number of concurrent memory requests. Specifically, concurrent memory requests accessing contiguous memory space are coalesced into warp-wide accesses. To support such large accesses to L1 cache with low latency, the size of an L1 cache line is no smaller than that of a warp-wide access. However, such an L1 cache architecture cannot always be efficiently utilized when applications generate many memory requests with irregular access patterns, especially due to branch and memory divergences that make requests uncoalesced and small. Furthermore, unlike L1 cache, the shared memory of GPUs is often left unused in many applications, since its use essentially depends on the programmer. In this article, we propose Elastic-Cache, which can efficiently support both fine- and coarse-grained L1 cache line management for applications with both regular and irregular memory access patterns to improve L1 cache efficiency. Specifically, it can store 32- or 64-byte words from non-contiguous memory space in a single 128-byte cache line. Furthermore, it neither requires an extra memory structure nor reduces the capacity of L1 cache for tag storage, since it stores auxiliary tags for fine-grained L1 cache line management in the shared memory space that is not fully used in many applications. To improve the bandwidth utilization of L1 cache with Elastic-Cache for fine-grained accesses, we further propose Elastic-Plus, which issues 32-byte memory requests in parallel, reducing the processing latency of memory instructions and improving the throughput of GPUs. Our experimental results show that Elastic-Cache improves the geometric-mean performance of applications with irregular memory access patterns by 104% without degrading the performance of applications with regular memory access patterns.
Elastic-Plus outperforms Elastic-Cache and improves the performance of applications with irregular memory access patterns by 131%.
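The granularity problem the abstract describes can be illustrated with a small Python model. This sketch is hypothetical and not the paper's implementation: it simply counts how many aligned blocks a warp's 32 four-byte loads touch under a coalesced versus a scattered access pattern, comparing coarse 128-byte cache lines against the 32-byte fine-grained words that Elastic-Cache can pack into one physical line.

```python
# Illustrative model of a warp's cache footprint. All names and
# parameters here are assumptions for the sketch, not taken from
# the Elastic-Cache design.

WARP_SIZE = 32   # threads per warp
WORD = 4         # bytes loaded per thread

def blocks_touched(addresses, granularity):
    """Count distinct aligned blocks of `granularity` bytes that the
    given byte addresses fall into."""
    return len({addr // granularity for addr in addresses})

# Regular (coalesced) pattern: thread i loads byte offset i * 4,
# so the whole warp reads one contiguous 128-byte span.
coalesced = [i * WORD for i in range(WARP_SIZE)]

# Irregular (divergent) pattern: thread i loads a word 128 bytes
# away from its neighbor, so every access lands in a different line.
scattered = [i * 128 for i in range(WARP_SIZE)]

for name, addrs in (("coalesced", coalesced), ("scattered", scattered)):
    lines = blocks_touched(addrs, 128)    # coarse 128-byte lines
    sectors = blocks_touched(addrs, 32)   # fine 32-byte words
    print(f"{name}: {lines} x 128B lines ({lines * 128}B fetched), "
          f"{sectors} x 32B sectors ({sectors * 32}B fetched)")
```

Under this model, the scattered warp touches 32 distinct 128-byte lines (4096 bytes fetched for only 128 useful bytes), whereas 32-byte granularity cuts the fetched volume to 1024 bytes; Elastic-Cache's contribution is to then store such small words from non-contiguous addresses together in one 128-byte line without dedicating extra storage to the additional tags.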
Pages: 1-24