An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers

Cited by: 19
Authors
Fang, Chao [1 ]
Zhou, Aojun [2 ]
Wang, Zhongfeng [1 ]
Affiliations
[1] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210008, Peoples R China
[2] Chinese Univ Hong Kong CUHK, CUHK Sensetime Joint Lab, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Algorithm-hardware codesign; hardware accelerator; model compression; pruning; Transformer; CNN ACCELERATOR; EFFICIENT;
DOI
10.1109/TVLSI.2022.3197282
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
The Transformer has become an indispensable staple in deep learning. However, for real-life applications, deploying efficient Transformers is very challenging due to the immense parameters and operations of the models. To relieve this burden, exploiting sparsity is an effective approach to accelerate Transformers. Newly emerging Ampere graphics processing units (GPUs) leverage a 2:4 sparsity pattern to achieve model acceleration, but this fixed pattern can hardly meet the diverse algorithm and hardware constraints encountered when deploying models. By contrast, we propose an algorithm-hardware co-optimized framework to flexibly and efficiently accelerate Transformers by utilizing general N:M sparsity patterns. First, from an algorithm perspective, we propose a sparsity inheritance mechanism along with inherited dynamic pruning (IDP) to rapidly obtain a series of N:M sparse candidate Transformers. A model compression scheme is further proposed to significantly reduce the storage requirement for deployment. Second, from a hardware perspective, we present a flexible and efficient hardware architecture, namely, STA, to achieve significant speedup when deploying N:M sparse Transformers. STA features not only a computing engine unifying both sparse-dense and dense-dense matrix multiplications with high computational efficiency but also a scalable softmax module eliminating the latency from intermediate off-chip data communication. Experimental results show that, compared to other methods, N:M sparse Transformers generated using IDP achieve an average improvement of 6.7% in accuracy with high training efficiency. Moreover, STA can achieve 14.47x and 11.33x speedups compared to an Intel i9-9900X and an NVIDIA RTX 2080 Ti, respectively, and performs 2.00x to 19.47x faster inference than the state-of-the-art field-programmable gate array (FPGA)-based accelerators for Transformers.
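The N:M sparsity pattern described above means that in every group of M consecutive weights, at most N nonzeros are kept (the Ampere GPU case being N=2, M=4). The minimal sketch below illustrates generic magnitude-based N:M pruning; it is not the paper's IDP procedure, and `nm_prune` is a hypothetical helper written only to make the pattern concrete.

```python
import numpy as np

def nm_prune(weights, n=2, m=4):
    """Keep the n largest-magnitude values in every group of m
    consecutive weights (generic N:M sparsity illustration)."""
    w = np.asarray(weights, dtype=float)
    groups = w.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(w.shape)

w = np.array([0.1, -0.9, 0.4, 0.05, 0.7, -0.2, 0.03, 0.6])
print(nm_prune(w, n=2, m=4))
# → [ 0.  -0.9  0.4  0.   0.7  0.   0.   0.6]
```

Because every M-wide group has the same fixed nonzero budget, the surviving values and their in-group indices can be stored compactly, which is what makes such patterns amenable to regular hardware datapaths like the STA computing engine.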
Pages: 1573-1586
Page count: 14