An Efficient GPU Cache Architecture for Applications with Irregular Memory Access Patterns

Cited by: 10
Authors
Li, Bingchao [1 ]
Wei, Jizeng [2 ]
Sun, Jizhou [2 ]
Annavaram, Murali [3 ]
Kim, Nam Sung [4 ,5 ]
Affiliations
[1] Civil Aviat Univ China, Sch Comp Sci & Technol, 2898 Jinbei Rd, Tianjin 300300, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, 135 Yaguan Rd,Haihe Educ Pk, Tianjin 300350, Peoples R China
[3] Univ Southern Calif, Dept Elect & Comp Engn, Hughes Aircraft Elect Engn Ctr, 3740 McClintock Ave, Los Angeles, CA 90089 USA
[4] Univ Illinois, Urbana, IL USA
[5] Coordinated Sci Lab, Dept Elect & Comp Engn, 1308 West Main St, Urbana, IL 61801 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
GPU; cache; shared memory; thread;
DOI
10.1145/3322127
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
GPUs provide high-bandwidth, low-latency on-chip shared memory and L1 cache to efficiently service a large number of concurrent memory requests. Specifically, concurrent memory requests accessing contiguous memory space are coalesced into warp-wide accesses. To support such large accesses to the L1 cache with low latency, the L1 cache line is no smaller than a warp-wide access. However, such an L1 cache architecture cannot always be utilized efficiently when applications generate many memory requests with irregular access patterns, especially due to branch and memory divergence, which leaves requests uncoalesced and small. Furthermore, unlike the L1 cache, the shared memory of GPUs often goes unused, since its use essentially depends on the programmer. In this article, we propose Elastic-Cache, which efficiently supports both fine- and coarse-grained L1 cache line management for applications with regular and irregular memory access patterns, improving L1 cache efficiency. Specifically, it can store 32- or 64-byte words from non-contiguous memory space in a single 128-byte cache line. Furthermore, it neither requires an extra memory structure nor reduces the L1 cache capacity for tag storage, since it stores the auxiliary tags for fine-grained L1 cache line management in the shared-memory space that many applications leave underused. To improve the L1 cache bandwidth utilization of Elastic-Cache for fine-grained accesses, we further propose Elastic-Plus, which issues 32-byte memory requests in parallel, reducing the processing latency of memory instructions and improving GPU throughput. Our experimental results show that Elastic-Cache improves the geometric-mean performance of applications with irregular memory access patterns by 104% without degrading the performance of applications with regular memory access patterns.
Elastic-Plus outperforms Elastic-Cache and improves the performance of applications with irregular memory access patterns by 131%.
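The abstract's core idea can be illustrated with a minimal software model: a 128-byte physical cache line that, instead of holding one contiguous 128-byte block, holds up to four 32-byte words from non-contiguous addresses, each tracked by its own auxiliary tag. This is only an illustrative sketch of the fine-grained mode described above; the class name, FIFO replacement, and address arithmetic are assumptions, not the paper's actual hardware design.

```python
# Hypothetical model of a fine-grained "elastic" cache line: one 128-byte
# line partitioned into four 32-byte sectors, each with an auxiliary tag,
# so uncoalesced 32-byte requests from divergent warps can share a line.

LINE_SIZE = 128   # bytes per physical cache line
WORD_SIZE = 32    # bytes per fine-grained word
SECTORS = LINE_SIZE // WORD_SIZE  # 4 sub-entries per line

class ElasticLine:
    def __init__(self):
        # One auxiliary tag per 32-byte sector (None = invalid).
        self.tags = [None] * SECTORS

    def lookup(self, addr):
        """Return True on a fine-grained hit for a 32-byte access."""
        return (addr // WORD_SIZE) in self.tags

    def fill(self, addr):
        """Insert a 32-byte word; evict the oldest sector if full (FIFO)."""
        tag = addr // WORD_SIZE
        if tag in self.tags:
            return
        self.tags.pop(0)       # drop oldest (or an invalid) sector
        self.tags.append(tag)

line = ElasticLine()
for addr in (0x1000, 0x2040, 0x3F80):  # three non-contiguous 32B words
    line.fill(addr)
assert line.lookup(0x2040)      # all three coexist in one 128-byte line
assert not line.lookup(0x5000)  # untouched address misses
```

A conventional 128-byte line could cache only one of these three addresses at a time; the per-sector tags are what let the same storage serve scattered 32-byte requests, at the cost of extra tag state, which the paper proposes to keep in otherwise-unused shared memory.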
Pages: 1 - 24 (24 pages)
Related papers
50 entries in total
  • [11] Energy-efficient cache architecture for multimedia applications
    Yang, CL
    Lee, CH
    Tseng, HW
    2005 Emerging Information Technology Conference (EITC), 2005, : 165 - 166
  • [12] Elastic-Cache: GPU Cache Architecture for Efficient Fine- and Coarse-Grained Cache-Line Management
    Li, Bingchao
    Sun, Jizhou
    Annavaram, Murali
    Kim, Nam Sung
    2017 31ST IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS), 2017, : 82 - 91
  • [13] Cellular Automata model tuned for efficient computation on GPU with global memory cache
    Topa, Pawel
    2014 22ND EUROMICRO INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, AND NETWORK-BASED PROCESSING (PDP 2014), 2014, : 380 - 383
  • [14] MAPA: An Automatic Memory Access Pattern Analyzer for GPU Applications
    Jo, Gangwon
    Jung, Jaehoon
    Park, Jiyoung
    Lee, Jaejin
    ACM SIGPLAN NOTICES, 2017, 52 (08) : 443 - 444
  • [15] Memory Access Patterns: The Missing Piece of the Multi-GPU Puzzle
    Ben-Nun, Tal
    Levy, Ely
    Barak, Amnon
    Rubin, Eri
    PROCEEDINGS OF SC15: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2015,
  • [16] RECONFIGURABLE CACHE MEMORY ARCHITECTURE FOR INTEGRAL IMAGE AND INTEGRAL HISTOGRAM APPLICATIONS
    Hsu, Po-Hao
    Chien, Shao-Yi
    2011 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS), 2011, : 151 - 156
  • [17] Cache Capacity Aware Thread Scheduling for Irregular Memory Access on Many-Core GPGPUs
    Kuo, Hsien-Kai
    Yen, Ta-Kan
    Lai, Bo-Cheng Charles
    Jou, Jing-Yang
    2013 18TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2013, : 338 - 343
  • [18] Memory access pattern analysis and stream cache design for multimedia applications
    Lee, J
    Park, C
    Ha, S
    ASP-DAC 2003: PROCEEDINGS OF THE ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, 2003, : 22 - 27
  • [19] Efficient Support for Irregular Applications on Distributed-Memory Machines
    Mukherjee, S. S.
    Sharma, S. D.
    Hill, M. D.
    Larus, J. R.
    Rogers, A.
    Saltz, J.
    SIGPLAN NOTICES, 1995, 30 (08) : 68 - 79
  • [20] Paged cache: An efficient partition architecture for reducing power, area and access time
    Chang, YJ
    Lai, FP
    APCCAS 2002: ASIA-PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS, VOL 2, PROCEEDINGS, 2002, : 473 - 478