Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer

Cited by: 0
Authors
Shi, Huihong [1 ]
Shao, Haikuo [1 ]
Mao, Wendong [2 ]
Wang, Zhongfeng [1 ,3 ]
Affiliations
[1] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210023, Peoples R China
[2] Sun Yat-sen Univ, Sch Integrated Circuits, Shenzhen 510275, Peoples R China
[3] Sun Yat-sen Univ, Sch Integrated Circuits, Shenzhen 518107, Peoples R China
Keywords
Quantization (signal); Hardware; Transformers; Standards; Accuracy; Computer vision; Computational modeling; Computational complexity; Engines; Computational efficiency; Post-training quantization; hardware acceleration; transformer; softmax-free efficient vision transformer
DOI
10.1109/TCSI.2024.3485192
CLC classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
Motivated by the huge success of Transformers in the field of natural language processing (NLP), Vision Transformers (ViTs) have been rapidly developed and have achieved remarkable performance in various computer vision tasks. However, their huge model sizes and intensive computations hinder ViTs' deployment on embedded devices, calling for effective model compression methods such as quantization. Unfortunately, due to the existence of hardware-unfriendly and quantization-sensitive non-linear operations, particularly Softmax, it is non-trivial to completely quantize all operations in ViTs, yielding either significant accuracy drops or non-negligible hardware costs. In response to the challenges associated with standard ViTs, we turn our attention to the quantization and acceleration of efficient ViTs, which not only eliminate the troublesome Softmax but also integrate linear attention with low computational complexity, and propose Trio-ViT accordingly. Specifically, at the algorithm level, we develop a tailored post-training quantization engine that takes the unique activation distributions of Softmax-free efficient ViTs into full consideration, aiming to boost quantization accuracy. Furthermore, at the hardware level, we build an accelerator dedicated to the specific Convolution-Transformer hybrid architecture of efficient ViTs, thereby enhancing hardware efficiency. Extensive experimental results consistently prove the effectiveness of our Trio-ViT framework. Particularly, we can gain up to ↑3.6×, ↑5.0×, and ↑7.3× FPS under comparable accuracy over state-of-the-art ViT accelerators, as well as ↑6.0×, ↑1.5×, and ↑2.1× DSP efficiency.
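For context, the basic mechanism that post-training quantization engines such as the one described above build on can be sketched as plain symmetric uniform quantization: derive a scale from a calibration tensor, round activations to a signed integer grid, and dequantize by rescaling. This is a minimal generic illustration, not the paper's tailored engine; the function names are our own.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, n_bits: int = 8):
    """Symmetric uniform post-training quantization of a tensor.

    The scale is calibrated from the tensor's maximum magnitude, so
    rounding to the integer grid introduces at most 0.5 * scale error.
    """
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 127 for signed 8-bit
    scale = np.abs(x).max() / qmax         # per-tensor calibration
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map integer codes back to approximate real values."""
    return q.astype(np.float32) * scale

# Quantize a random activation tensor and measure the worst-case error.
x = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_uniform(x)
x_hat = dequantize(q, s)
max_err = float(np.abs(x - x_hat).max())
```

Per-tensor calibration like this is the simplest choice; the skewed, long-tailed activation distributions that the abstract highlights are exactly the case where such a single global scale wastes precision, which motivates distribution-aware engines.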
Pages: 1296 - 1307
Page count: 12
Related Papers
25 records in total
  • [1] Post-Training Quantization for Vision Transformer
    Liu, Zhenhua
    Wang, Yunhe
    Han, Kai
    Zhang, Wei
    Ma, Siwei
    Gao, Wen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] P²-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer
    Shi, Huihong
    Cheng, Xin
    Mao, Wendong
    Wang, Zhongfeng
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2024, 32 (09) : 1704 - 1717
  • [3] Towards Accurate Post-Training Quantization for Vision Transformer
    Ding, Yifu
    Qin, Haotong
    Yan, Qinghua
    Chai, Zhenhua
    Liu, Junjie
    Wei, Xiaolin
    Liu, Xianglong
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5380 - 5388
  • [4] POST-TRAINING QUANTIZATION FOR VISION TRANSFORMER IN TRANSFORMED DOMAIN
    Feng, Kai
    Chen, Zhuo
    Gao, Fei
    Wang, Zhe
    Xu, Long
    Lin, Weisi
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1457 - 1462
  • [5] PTQ4ViT: Post-training Quantization for Vision Transformers with Twin Uniform Quantization
    Yuan, Zhihang
    Xue, Chenhao
    Chen, Yiqi
    Wu, Qiang
    Sun, Guangyu
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 191 - 207
  • [6] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
    Li, Zhikai
    Xiao, Junrui
    Yang, Lianwei
    Gu, Qingyi
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 17181 - 17190
  • [7] FGPTQ-ViT: Fine-Grained Post-training Quantization for Vision Transformers
    Liu, Caihua
    Shi, Hongyang
    He, Xinyu
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 79 - 90
  • [8] ADFQ-ViT: Activation-Distribution-Friendly post-training Quantization for Vision Transformers
    Jiang, Yanfeng
    Sun, Ning
    Xie, Xueshuo
    Yang, Fei
    Li, Tao
    NEURAL NETWORKS, 2025, 186
  • [9] AGQB-ViT: Adaptive granularity quantizer with bias for post-training quantization of Vision Transformers
    Huo, Ying
    Kang, Yongqiang
    Yang, Dawei
    Zhu, Jiahao
    NEUROCOMPUTING, 2025, 637
  • [10] CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-training Quantization of ViTs
    Ramachandran, Akshat
    Kundu, Souvik
    Krishna, Tushar
    COMPUTER VISION - ECCV 2024, PT LXVII, 2025, 15125 : 307 - 325