An Efficient GPU Cache Architecture for Applications with Irregular Memory Access Patterns

Cited by: 10
Authors
Li, Bingchao [1 ]
Wei, Jizeng [2 ]
Sun, Jizhou [2 ]
Annavaram, Murali [3 ]
Kim, Nam Sung [4 ,5 ]
Affiliations
[1] Civil Aviat Univ China, Sch Comp Sci & Technol, 2898 Jinbei Rd, Tianjin 300300, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, 135 Yaguan Rd,Haihe Educ Pk, Tianjin 300350, Peoples R China
[3] Univ Southern Calif, Dept Elect & Comp Engn, Hughes Aircraft Elect Engn Ctr, 3740 McClintock Ave, Los Angeles, CA 90089 USA
[4] Univ Illinois, Urbana, IL USA
[5] Coordinated Sci Lab, Dept Elect & Comp Engn, 1308 West Main St, Urbana, IL 61801 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
GPU; cache; shared memory; thread;
DOI
10.1145/3322127
CLC Classification Number
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
GPUs provide high-bandwidth, low-latency on-chip shared memory and an L1 cache to efficiently service a large number of concurrent memory requests. Specifically, concurrent memory requests that access contiguous memory space are coalesced into warp-wide accesses. To support such large accesses to the L1 cache with low latency, the L1 cache line is no smaller than a warp-wide access. However, this L1 cache architecture cannot always be utilized efficiently when applications generate many memory requests with irregular access patterns, especially due to branch and memory divergence, which leaves requests uncoalesced and small. Furthermore, unlike the L1 cache, the shared memory of GPUs often goes unused in many applications, since exploiting it is left to the programmer. In this article, we propose Elastic-Cache, which efficiently supports both fine- and coarse-grained L1 cache line management for applications with regular and irregular memory access patterns, improving L1 cache efficiency. Specifically, it can store 32- or 64-byte words from non-contiguous memory space in a single 128-byte cache line. Furthermore, it neither requires an extra memory structure nor reduces L1 cache capacity for tag storage, since it places the auxiliary tags for fine-grained L1 cache line management in the shared memory space that many applications leave unused. To improve the L1 cache bandwidth utilization of Elastic-Cache under fine-grained accesses, we further propose Elastic-Plus, which issues 32-byte memory requests in parallel, reducing the processing latency of memory instructions and improving GPU throughput. Our experimental results show that Elastic-Cache improves the geometric-mean performance of applications with irregular memory access patterns by 104% without degrading the performance of applications with regular memory access patterns. Elastic-Plus outperforms Elastic-Cache, improving the performance of applications with irregular memory access patterns by 131%.
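The benefit of fine-grained line management for divergent accesses can be illustrated with a small cache simulation. This is a sketch under assumed parameters (tiny 512-byte capacity, fully associative LRU, a hypothetical `run` helper), not the paper's hardware design: it only shows why filling 32-byte sectors instead of whole 128-byte lines raises effective capacity when a warp's requests are uncoalesced.

```python
from collections import OrderedDict

def run(capacity_bytes, granularity, addresses):
    """Fully associative LRU cache that fills `granularity` bytes per miss.

    Returns the number of hits over the address trace.
    """
    cache = OrderedDict()                     # block-aligned address -> None, in LRU order
    max_blocks = capacity_bytes // granularity
    hits = 0
    for addr in addresses:
        block = addr - addr % granularity     # align to the fill granularity
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # refresh LRU position
        else:
            if len(cache) >= max_blocks:
                cache.popitem(last=False)     # evict least-recently-used block
            cache[block] = None
    return hits

# Divergent trace: a warp's lanes each touch one 32-byte word spaced
# 128 bytes apart, then the warp repeats the same accesses.
trace = [i * 128 for i in range(8)] * 2

coarse = run(512, 128, trace)   # classic 128-byte lines: 4 lines fit, LRU thrashes
fine = run(512, 32, trace)      # 32-byte sectors: the whole trace fits
print(coarse, fine)
```

With 128-byte fills, only 4 of the 8 blocks fit and the repeated pass misses entirely; with 32-byte fills, all 8 words fit and the second pass hits every time. Elastic-Cache's additional trick, not modeled here, is packing such non-contiguous sectors into one physical 128-byte line and keeping the extra per-sector tags in otherwise-unused shared memory.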
Pages: 1-24 (24 pages)
Related Papers
50 in total
  • [31] A Memory-Access-Efficient Implementation of the Approximate String Matching Algorithm on GPU
    Nunes, Lucas S. N.
    Bordim, J. L.
    Nakano, K.
    Ito, Y.
    2016 FOURTH INTERNATIONAL SYMPOSIUM ON COMPUTING AND NETWORKING (CANDAR), 2016, : 483 - 489
  • [32] Virtual-Cache: A cache-line borrowing technique for efficient GPU cache architectures
    Li, Bingchao
    Wei, Jizeng
    Kim, Nam Sung
    MICROPROCESSORS AND MICROSYSTEMS, 2021, 85
  • [33] The Cache DRAM Architecture - A DRAM with an On-Chip Cache Memory
    Hidaka, H.
    Matsuda, Y.
    Asakura, M.
    Fujishima, K.
    IEEE MICRO, 1990, 10 (02) : 14 - 25
  • [34] Adaptive Page Migration for Irregular Data-intensive Applications under GPU Memory Oversubscription
    Ganguly, Debashis
    Zhang, Ziyu
    Yang, Jun
    Melhem, Rami
    2020 IEEE 34TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM IPDPS 2020, 2020, : 451 - 461
  • [35] Optimizing Memory-Compute Colocation for Irregular Applications on a Migratory Thread Architecture
    Rolinger, Thomas B.
    Krieger, Christopher D.
    Sussman, Alan
    2021 IEEE 35TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS), 2021, : 58 - 67
  • [36] FLECHE: An Efficient GPU Embedding Cache for Personalized Recommendations
    Xie, Minhui
    Lu, Youyou
    Lin, Jiazhen
    Wang, Qing
    Gao, Jian
    Ren, Kai
    Shu, Jiwu
    PROCEEDINGS OF THE SEVENTEENTH EUROPEAN CONFERENCE ON COMPUTER SYSTEMS (EUROSYS '22), 2022, : 402 - 416
  • [37] Efficient GPU multitasking with latency minimization and cache boosting
    Kim, Jiho
    Chu, Minsung
    Park, Yongjun
    IEICE ELECTRONICS EXPRESS, 2017, 14 (07):
  • [38] GPU architecture and applications of GPU-enabled computing
    Poole, Duncan
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2010, 240
  • [39] Adaptive cache line strategy for irregular references on Cell architecture
    Cao Q.
    Hu C.-J.
    Zhang Y.-X.
    Zhu Y.-T.
    Jisuanji Xuebao/Chinese Journal of Computers, 2011, 34 (05): : 898 - 911
  • [40] Memory Access Algorithm for Low Energy CPU/GPU Heterogeneous Systems With Hybrid DRAM/NVM Memory Architecture
    Chien, Tsai-Kan
    Chiou, Lih-Yih
    Cheng, Chieh-Wen
    Sheu, Shyh-Shyuan
    Wang, Pei-Hua
    Tsai, Ming-Jinn
    Wu, Chih-I
    2016 IEEE ASIA PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS (APCCAS), 2016, : 461 - 464