VEGETA: Vertically-Integrated Extensions for Sparse/Dense GEMM Tile Acceleration on CPUs

Cited by: 4
Authors
Jeong, Geonhwa [1 ]
Damani, Sana [1 ,3 ]
Bambhaniya, Abhimanyu Rajeshkumar [1 ]
Qin, Eric [1 ,4 ]
Hughes, Christopher J. [2 ]
Subramoney, Sreenivas [2 ]
Kim, Hyesoon [1 ]
Krishna, Tushar [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Intel Labs, Hillsboro, OR USA
[3] NVIDIA, Santa Clara, CA USA
[4] Meta, Menlo Pk, CA USA
DOI
10.1109/HPCA56546.2023.10071058
CLC Number
TP3 [Computing technology and computer technology]
Discipline Code
0812
Abstract
Deep Learning (DL) acceleration support in CPUs has recently gained a lot of traction, with several companies (Arm, Intel, IBM) announcing products with specialized matrix engines accessible via GEMM instructions. CPUs are pervasive and need to handle diverse requirements across DL workloads running in edge/HPC/cloud platforms. Therefore, as DL workloads embrace sparsity to reduce the computations and memory size of models, it is also imperative for CPUs to add support for sparsity to avoid under-utilization of the dense matrix engine and inefficient usage of the caches and registers. This work presents VEGETA, a set of ISA and microarchitecture extensions over dense matrix engines to support flexible structured sparsity for CPUs, enabling programmable support for diverse DL models with varying degrees of sparsity. Compared to the state-of-the-art (SOTA) dense matrix engine in CPUs, a VEGETA engine provides 1.09x, 2.20x, 3.74x, and 3.28x speed-ups when running 4:4 (dense), 2:4, 1:4, and unstructured (95%) sparse DNN layers.
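The abstract's 2:4 and 1:4 configurations refer to N:M structured sparsity, where every group of M consecutive weights keeps at most N nonzeros. The sketch below (not from the paper; the function name and NumPy-based pruning are illustrative assumptions) shows how a dense weight vector can be pruned to satisfy an N:M constraint by keeping the N largest-magnitude values in each group of M:

```python
import numpy as np

def prune_n_m(weights, n=2, m=4):
    """Illustrative N:M structured pruning: in every group of m consecutive
    weights, keep the n largest-magnitude values and zero out the rest."""
    w = weights.reshape(-1, m).copy()
    # Indices of the (m - n) smallest-magnitude entries in each group.
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

row = np.array([0.9, -0.1, 0.4, -0.05, 0.2, 0.7, -0.3, 0.01])
pruned = prune_n_m(row, n=2, m=4)
# Each group of 4 now holds exactly 2 nonzeros, e.g. [0.9, 0, 0.4, 0, ...]
```

A 2:4-pruned matrix halves the multiply-accumulate work per tile, which is why hardware with structured-sparsity support (like the VEGETA engine described here) can skip the zeroed lanes rather than wasting dense-engine throughput on them.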
Pages: 259-272
Page count: 14