HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers

Cited by: 25
Authors
Dong, Peiyan [1 ]
Sun, Mengshu [1 ]
Lu, Alec [2 ]
Xie, Yanyue [1 ]
Liu, Kenneth [2 ]
Kong, Zhenglun [1 ]
Meng, Xin [1 ]
Li, Zhengang [1 ]
Lin, Xue [1 ]
Fang, Zhenman [2 ]
Wang, Yanzhi [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Simon Fraser Univ, Burnaby, BC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Vision Transformer; FPGA Accelerator; Hardware and Software Co-design; Data-level Sparsity;
DOI
10.1109/HPCA56546.2023.10071047
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
While vision transformers (ViTs) have continuously achieved new milestones in the field of computer vision, their sophisticated network architectures with high computation and memory costs have impeded their deployment on resource-limited edge devices. In this paper, we propose a hardware-efficient image-adaptive token pruning framework called HeatViT for efficient yet accurate ViT acceleration on embedded FPGAs. Based on the inherent computational patterns in ViTs, we first adopt an effective, hardware-efficient, and learnable head-evaluation token selector, which can be progressively inserted before transformer blocks to dynamically identify and consolidate the non-informative tokens from input images. Moreover, we implement the token selector on hardware by adding miniature control logic that heavily reuses existing hardware components built for the backbone ViT. To improve hardware efficiency, we further employ 8-bit fixed-point quantization and propose polynomial approximations, with a regularization effect on quantization error, for the frequently used nonlinear functions in ViTs. Compared to existing ViT pruning studies, under similar computation cost, HeatViT achieves 0.7% to 8.9% higher accuracy; under similar model accuracy, HeatViT achieves 28.4% to 65.3% more computation reduction, for various widely used ViTs, including DeiT-T, DeiT-S, DeiT-B, LV-ViT-S, and LV-ViT-M, on the ImageNet dataset. Compared to the baseline hardware accelerator, our implementations of HeatViT on the Xilinx ZCU102 FPGA achieve 3.46x to 4.89x speedup with a trivial resource utilization overhead of 8% to 11% more DSPs and 5% to 8% more LUTs.
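The token-selection idea described in the abstract can be sketched roughly as follows: score each token's informativeness, keep the top-scoring tokens, and consolidate the rest into a single token rather than discarding them outright. This is a minimal NumPy sketch of that consolidation step, not the authors' implementation; the function name, the fixed keep ratio, and the use of precomputed scores (in HeatViT the scores come from a learnable head-evaluation selector) are all illustrative assumptions.

```python
import numpy as np

def select_and_consolidate(tokens, scores, keep_ratio=0.7):
    """Keep the highest-scoring tokens and merge the rest into one
    consolidated token (a score-weighted average), in the spirit of
    adaptive token pruning. Hypothetical sketch, not HeatViT's code.

    tokens: (n, d) array of token embeddings
    scores: (n,) array of per-token informativeness scores
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    order = np.argsort(scores)[::-1]        # indices, most informative first
    keep_idx, drop_idx = order[:k], order[k:]
    kept = tokens[keep_idx]
    if drop_idx.size:
        w = scores[drop_idx]
        # Weight the dropped tokens by their scores; fall back to a
        # uniform average if all dropped scores are zero.
        w = w / w.sum() if w.sum() > 0 else np.full(drop_idx.size, 1.0 / drop_idx.size)
        merged = (tokens[drop_idx] * w[:, None]).sum(axis=0, keepdims=True)
        kept = np.concatenate([kept, merged], axis=0)
    return kept
```

Consolidating instead of deleting preserves some information from pruned patches, which is one reason adaptive token pruning can keep accuracy close to the unpruned model; the hardware contribution of the paper is making this selector cheap by reusing the backbone's existing compute units.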
Pages: 442 - 455 (14 pages)
Related Papers
50 records in total
  • [21] Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers
    Long, Sifan
    Zhao, Zhen
    Pi, Jimin
    Wang, Shengsheng
    Wang, Jingdong
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10334 - 10343
  • [22] Parallel implementation of hardware-efficient adaptive equalization for coherent PON systems
    Liu, Na
    Ju, Cheng
    Li, Changhong
    OPTICAL AND QUANTUM ELECTRONICS, 2021, 53 (01)
  • [24] Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers
    Lu, Chenyang
    de Geus, Daan
    Dubbelman, Gijs
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 23631 - 23640
  • [25] A unified pruning framework for vision transformers
    Yu, Hao
    Wu, Jianxin
    SCIENCE CHINA-INFORMATION SCIENCES, 2023, 66 (07)
  • [26] Width & Depth Pruning for Vision Transformers
    Yu, Fang
    Huang, Kun
    Wang, Meng
    Cheng, Yuan
    Chu, Wei
    Cui, Li
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3143 - 3151
  • [27] Towards Efficient Neuromorphic Hardware: Unsupervised Adaptive Neuron Pruning
    Guo, Wenzhe
    Yantir, Hasan Erdem
    Fouda, Mohammed E.
    Eltawil, Ahmed M.
    Salama, Khaled Nabil
    ELECTRONICS, 2020, 9 (07) : 1 - 15
  • [29] Adaptive class token knowledge distillation for efficient vision transformer
    Kang, Minchan
    Son, Sanghyeok
    Kim, Daeshik
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [30] AdaViT: Adaptive Vision Transformers for Efficient Image Recognition
    Meng, Lingchen
    Li, Hengduo
    Chen, Bor-Chun
    Lan, Shiyi
    Wu, Zuxuan
    Jiang, Yu-Gang
    Lim, Ser-Nam
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12299 - 12308