Future Scaling of Memory Hierarchy for Tensor Cores and Eliminating Redundant Shared Memory Traffic Using Inter-Warp Multicasting

Cited by: 1
Authors
Lee, Sunjung [1 ]
Hwang, Seunghwan [1 ]
Kim, Michael Jaemin [1 ]
Choi, Jaewan [1 ]
Ahn, Jung Ho [2 ,3 ]
Affiliations
[1] Seoul Natl Univ SNU, Dept Intelligence & Informat, Seoul 08826, South Korea
[2] Seoul Natl Univ SNU, Res Inst Convergence Sci, Dept Intelligence & Informat, Interdisciplinary Program Artificial Intelligence, Seoul 08826, South Korea
[3] Seoul Natl Univ SNU, Inst Comp Technol, Seoul 08826, South Korea
Keywords
GPU performance; deep neural network; tensor core; inter-warp multicasting; PERFORMANCE;
DOI
10.1109/TC.2022.3207134
CLC Classification Number
TP3 [Computing technology, computer technology]
Discipline Classification Code
0812
Abstract
The CUDA core of NVIDIA GPUs has long been one of the most efficient computation units for parallel computing. However, the rapid development of deep neural networks demands an even higher level of computational performance, and NVIDIA has introduced the Tensor core in recent GPU generations to meet it. The Tensor core's impressive gains in computational performance, however, place new pressure on the memory hierarchy. In this paper, we first identify the memory bandwidth required across the memory hierarchy as computational performance increases on actual GPU hardware. By comparing the CUDA core and the Tensor core on V100, we find that the tremendous performance increase of the Tensor core demands much higher memory bandwidth than the CUDA core does. Moreover, we thoroughly investigate memory bandwidth requirements across the Tensor core generations of V100, RTX TITAN, and A100. Lastly, using GPU simulation, we analyze a hypothetical next-generation Tensor core and propose an inter-warp multicasting microarchitecture that reduces redundant shared memory (SMEM) traffic during the GEMM process. Our evaluation shows that inter-warp multicasting reduces SMEM bandwidth pressure by 33% and improves performance by 19% on average across all layers of ResNet-152 and BERT-Large.
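To illustrate where the redundant SMEM traffic the abstract refers to comes from, the following is a minimal counting model (our own hypothetical sketch, not the paper's microarchitecture or evaluation): in a tiled GEMM, the warps of a thread block form a grid, every warp in the same warp row re-reads the same A operand tile from shared memory, and every warp in the same warp column re-reads the same B tile; a multicast mechanism could instead serve each distinct tile read once per main-loop step. The tile-grid shape and step count below are arbitrary example values.

```python
def smem_tile_reads(warp_rows: int, warp_cols: int, k_steps: int,
                    multicast: bool) -> int:
    """Count SMEM operand-tile reads for one thread block's GEMM main loop.

    Hypothetical model: warps are arranged in a warp_rows x warp_cols grid;
    each main-loop step over the K dimension needs one A tile per warp row
    and one B tile per warp column.
    """
    if multicast:
        # Each distinct A tile and B tile is read from SMEM once per step
        # and multicast to every warp that needs it.
        reads_per_step = warp_rows + warp_cols
    else:
        # Every warp independently reads both its A tile and its B tile.
        reads_per_step = 2 * warp_rows * warp_cols
    return reads_per_step * k_steps

# Example: 8 warps in a 2x4 grid, 32 main-loop steps over K.
baseline = smem_tile_reads(2, 4, 32, multicast=False)
shared = smem_tile_reads(2, 4, 32, multicast=True)
print(baseline, shared, f"reduction: {1 - shared / baseline:.1%}")
```

The exact reduction depends on the warp-grid shape; the 33% SMEM bandwidth reduction reported in the abstract comes from the authors' simulation of their proposed microarchitecture, not from this toy model.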
Pages: 3115-3126
Page count: 12