Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance

Cited by: 17
Authors
Tang, Chen [1]
Ouyang, Kai [1]
Wang, Zhi [1,4]
Zhu, Yifei [2]
Ji, Wen [3,4]
Wang, Yaowei [4]
Zhu, Wenwu [1]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Source
Computer Vision - ECCV 2022 (Lecture Notes in Computer Science, Springer)
Funding
Beijing Natural Science Foundation
Keywords
Mixed-precision quantization; Model compression
DOI
10.1007/978-3-031-20083-0_16
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The exponentially large discrete search space in mixed-precision quantization (MPQ) makes it hard to determine the optimal bit-width for each layer. Previous works usually resort to iterative search methods on the training set, which consume hundreds or even thousands of GPU-hours. In this study, we reveal that some unique learnable parameters in quantization, namely the scale factors in the quantizer, can serve as importance indicators of a layer, reflecting the contribution of that layer to the final accuracy at certain bit-widths. These importance indicators naturally perceive the numerical transformation during quantization-aware training and can therefore provide precise quantization-sensitivity metrics for the layers. However, a deep network always contains hundreds of such indicators, and training them one by one would lead to an excessive time cost. To overcome this issue, we propose a joint training scheme that obtains all indicators at once, which considerably speeds up indicator training by parallelizing the originally sequential training processes. With these learned importance indicators, we formulate the MPQ search problem as a one-time integer linear programming (ILP) problem. This avoids iterative search and significantly reduces search time without limiting the bit-width search space. For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, orders of magnitude faster than iterative search methods. Extensive experiments also show that our approach achieves state-of-the-art accuracy on ImageNet for a wide range of models under various constraints (e.g., BitOps, compression rate).
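To make the one-time search step concrete, the following is a minimal sketch of how bit-width allocation could be posed as an ILP once per-layer importance indicators are available. It uses the PuLP solver and hypothetical toy numbers for the importance scores, BitOps costs, and budget (none of these values come from the paper), so it illustrates the general formulation rather than the authors' exact method.

import pulp

# Candidate bit-widths and toy per-layer numbers (purely illustrative, not the paper's values).
# importance[l][j]: assumed learned importance of layer l at bit_choices[j]
#                   (e.g., derived from the quantizer scale factors)
# cost[l][j]:       assumed BitOps cost of layer l at bit_choices[j]
bit_choices = [2, 4, 8]
importance = [[0.20, 0.50, 0.60],
              [0.10, 0.40, 0.45],
              [0.30, 0.70, 0.80],
              [0.05, 0.20, 0.22]]
cost = [[1.0, 2.0, 4.0],
        [2.0, 4.0, 8.0],
        [1.5, 3.0, 6.0],
        [0.5, 1.0, 2.0]]
budget = 10.0  # hypothetical total BitOps budget
L, B = len(importance), len(bit_choices)

prob = pulp.LpProblem("mpq_bit_allocation", pulp.LpMaximize)
# x[l][j] = 1 iff layer l is assigned bit_choices[j]
x = [[pulp.LpVariable(f"x_{l}_{j}", cat="Binary") for j in range(B)] for l in range(L)]

# Objective: maximize the summed importance of the chosen (layer, bit-width) pairs.
prob += pulp.lpSum(importance[l][j] * x[l][j] for l in range(L) for j in range(B))
# Each layer receives exactly one bit-width.
for l in range(L):
    prob += pulp.lpSum(x[l][j] for j in range(B)) == 1
# The chosen configuration must respect the BitOps budget.
prob += pulp.lpSum(cost[l][j] * x[l][j] for l in range(L) for j in range(B)) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
bits = []
for l in range(L):
    j_star = max(range(B), key=lambda j: x[l][j].value())
    bits.append(bit_choices[j_star])
print(bits)  # one chosen bit-width per layer

Because the problem has one binary variable per (layer, bit-width) pair, the solve is effectively instantaneous for networks with tens of layers, which is consistent with the sub-second search times reported in the abstract.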
Pages: 259 - 275
Number of pages: 17
Related papers
50 records in total
  • [1] Design Space Exploration of Layer-Wise Mixed-Precision Quantization with Tightly Integrated Edge Inference Units
    Zhao, Xiaotian
    Gao, Yimin
    Verma, Vaibhav
    Xu, Ruge
    Stan, Mircea
    Guo, Xinfei
    [J]. PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2023, GLSVLSI 2023, 2023, : 467 - 471
  • [2] Edge-MPQ: Layer-Wise Mixed-Precision Quantization With Tightly Integrated Versatile Inference Units for Edge Computing
    Zhao, Xiaotian
    Xu, Ruge
    Gao, Yimin
    Verma, Vaibhav
    Stan, Mircea R.
    Guo, Xinfei
    [J]. IEEE Transactions on Computers, 2024, 73 (11) : 2504 - 2519
  • [3] SQNR-based Layer-wise Mixed-Precision Schemes with Computational Complexity Consideration
    Kim, Ha-Na
    Eun, Hyun
    Choi, Jung Hwan
    Kim, Ji-Hoon
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 234 - 235
  • [4] Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data
    Chen, Shangyu
    Wang, Wenya
    Pan, Sinno Jialin
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3329 - 3336
  • [5] Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization
    Chen, Weihan
    Wang, Peisong
    Cheng, Jian
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 5330 - 5339
  • [6] Mixed-precision Deep Neural Network Quantization With Multiple Compression Rates
    Wang, Xuanda
    Fei, Wen
    Dai, Wenrui
    Li, Chenglin
    Zou, Junni
    Xiong, Hongkai
    [J]. 2023 DATA COMPRESSION CONFERENCE, DCC, 2023, : 371 - 371
  • [7] EVOLUTIONARY QUANTIZATION OF NEURAL NETWORKS WITH MIXED-PRECISION
    Liu, Zhenhua
    Zhang, Xinfeng
    Wang, Shanshe
    Ma, Siwei
    Gao, Wen
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2785 - 2789
  • [8] Patch-wise Mixed-Precision Quantization of Vision Transformer
    Xiao, Junrui
    Li, Zhikai
    Yang, Lianwei
    Gu, Qingyi
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [9] Complexity-Aware Layer-Wise Mixed-Precision Schemes With SQNR-Based Fast Analysis
    Kim, Hana
    Eun, Hyun
    Choi, Jung Hwan
    Kim, Ji-Hoon
    [J]. IEEE ACCESS, 2023, 11 : 117800 - 117809
  • [10] AutoMPQ: Automatic Mixed-Precision Neural Network Search via Few-Shot Quantization Adapter
    Xu, Ke
    Shao, Xiangyang
    Tian, Ye
    Yang, Shangshang
    Zhang, Xingyi
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024,