Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer

Cited by: 0
Authors
Shi, Huihong [1 ]
Shao, Haikuo [1 ]
Mao, Wendong [2 ]
Wang, Zhongfeng [1 ,3 ]
Affiliations
[1] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210023, Peoples R China
[2] Sun Yat-sen Univ, Sch Integrated Circuits, Shenzhen 510275, Peoples R China
[3] Sun Yat-sen Univ, Sch Integrated Circuits, Shenzhen 518107, Peoples R China
Keywords
Quantization (signal); Hardware; Transformers; Standards; Accuracy; Computer vision; Computational modeling; Computational complexity; Engines; Computational efficiency; Post-training quantization; hardware acceleration; transformer; softmax-free efficient vision transformer;
DOI
10.1109/TCSI.2024.3485192
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Motivated by the huge success of Transformers in natural language processing (NLP), Vision Transformers (ViTs) have been rapidly developed and have achieved remarkable performance on various computer vision tasks. However, their large model sizes and intensive computations hinder ViTs' deployment on embedded devices, calling for effective model compression methods such as quantization. Unfortunately, due to the presence of hardware-unfriendly and quantization-sensitive non-linear operations, particularly Softmax, it is non-trivial to fully quantize all operations in ViTs, yielding either significant accuracy drops or non-negligible hardware costs. In response to the challenges associated with standard ViTs, we turn our attention to the quantization and acceleration of efficient ViTs, which not only eliminate the troublesome Softmax but also integrate linear attention with low computational complexity, and we propose Trio-ViT accordingly. Specifically, at the algorithm level, we develop a tailored post-training quantization engine that takes the unique activation distributions of Softmax-free efficient ViTs into full consideration, aiming to boost quantization accuracy. Furthermore, at the hardware level, we build an accelerator dedicated to the specific Convolution-Transformer hybrid architecture of efficient ViTs, thereby enhancing hardware efficiency. Extensive experimental results consistently prove the effectiveness of our Trio-ViT framework. In particular, we gain up to ↑3.6×, ↑5.0×, and ↑7.3× FPS under comparable accuracy over state-of-the-art ViT accelerators, as well as ↑6.0×, ↑1.5×, and ↑2.1× DSP efficiency.
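For context on the "Softmax-free" linear attention the abstract refers to: standard attention materializes an N×N score matrix and costs O(N²d) in the token count N, whereas linear attention applies a feature map φ to queries and keys and reassociates the product as φ(Q)(φ(K)ᵀV), dropping the cost to O(Nd²). The minimal sketch below assumes a ReLU feature map purely for illustration; the exact kernel and normalization in Trio-ViT's backbone may differ, and all function names are hypothetical.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: O(N^2 * d), dominated by the N x N score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def relu_linear_attention(Q, K, V, eps=1e-6):
    """Softmax-free linear attention: O(N * d^2).

    With a feature map phi (ReLU here, an assumption), attention becomes
    phi(Q) @ (phi(K).T @ V): the d x d matrix phi(K).T @ V is computed
    once, so the N x N attention matrix is never materialized.
    """
    q, k = np.maximum(Q, 0.0), np.maximum(K, 0.0)  # phi = ReLU
    kv = k.T @ V                  # (d, d) summary of keys and values
    z = k.sum(axis=0)             # (d,) normalizer replacing softmax's denominator
    return (q @ kv) / (q @ z[:, None] + eps)

# Toy shapes: N = 8 tokens, d = 4 head dimension.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8, 4))
print(softmax_attention(Q, K, V).shape)       # (8, 4)
print(relu_linear_attention(Q, K, V).shape)   # (8, 4)
```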
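The abstract's algorithm-level contribution is a post-training quantization (PTQ) engine tailored to the unique activation distributions of Softmax-free efficient ViTs. As a generic illustration of the underlying mechanism only, not the paper's actual engine, the sketch below shows per-channel uniform fake quantization: each channel's scale and zero-point come from its own observed range, so a heavy-tailed channel does not inflate the quantization error of well-behaved ones.

```python
import numpy as np

def fake_quantize_per_channel(x, n_bits=8):
    """Uniform asymmetric fake quantization with per-channel parameters.

    x: calibration activations of shape (samples, channels). Each channel
    gets its own scale/zero-point from its observed min/max.
    """
    qmax = 2 ** n_bits - 1
    lo = x.min(axis=0, keepdims=True)                  # (1, C) per-channel min
    hi = x.max(axis=0, keepdims=True)                  # (1, C) per-channel max
    scale = np.maximum(hi - lo, 1e-8) / qmax           # per-channel step size
    zero = np.round(-lo / scale)                       # per-channel zero-point
    q = np.clip(np.round(x / scale) + zero, 0, qmax)   # snap to integer grid
    return (q - zero) * scale                          # dequantize for comparison

# One heavy-tailed channel next to three well-behaved ones.
rng = np.random.default_rng(1)
acts = np.concatenate([rng.normal(0, 1, (64, 3)),
                       rng.normal(0, 20, (64, 1))], axis=1)
err = np.abs(acts - fake_quantize_per_channel(acts)).max(axis=0)
print(err)  # the outlier channel's error stays isolated from the others
```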
Pages: 1296-1307
Page count: 12
Related Papers
25 items in total (items [21]-[25] shown)
  • [21] Zhang, Rongzhao; Chung, Albert C. S. EfficientQ: An efficient and accurate post-training neural network quantization method for medical image segmentation. MEDICAL IMAGE ANALYSIS, 2024, 97.
  • [22] Li, Zhengang; Sun, Mengshu; Lu, Alec; Ma, Haoyu; Yuan, Geng; Xie, Yanyue; Tang, Hao; Li, Yanyu; Leeser, Miriam; Wang, Zhangyang; Lin, Xue; Fang, Zhenman. Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization. 2022 32ND INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE LOGIC AND APPLICATIONS (FPL), 2022: 109-116.
  • [23] Xu, Jiawei; Fan, Jiangshan; Nan, Baolin; Ding, Chen; Zheng, Li-Rong; Zou, Zhuo; Huan, Yuxiang. ASLog: An Area-Efficient CNN Accelerator for Per-Channel Logarithmic Post-Training Quantization. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12): 5380-5393.
  • [24] Lee, Jemin; Kwon, Yongin; Park, Sihyeong; Yu, Misun; Park, Jeman; Song, Hwanjun. Q-HyViT: Post-Training Quantization of Hybrid Vision Transformers With Bridge Block Reconstruction for IoT Systems. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (22): 36384-36396.
  • [25] Li, Jixing; Chen, Gang; Jin, Min; Mao, Wenyu; Lu, Huaxiang. AE-Qdrop: Towards Accurate and Efficient Low-Bit Post-Training Quantization for A Convolutional Neural Network. ELECTRONICS, 2024, 13 (03).