ADEQ: Adaptive Diversity Enhancement for Zero-Shot Quantization

Cited by: 0
Authors
Chen, Xinrui [1 ]
Yan, Renao [1 ]
Cheng, Junru [1 ]
Wang, Yizhi [1 ]
Fu, Yuqiu [1 ]
Chen, Yi [1 ]
Guan, Tian [1 ]
He, Yonghong [1 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Zero-shot Quantization; Diversity Enhancement; Class-wise Adaptability; Layer-wise Adaptability; Inter-class Separability;
DOI
10.1007/978-981-99-8079-6_5
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Zero-shot quantization (ZSQ) is an effective way to compress neural networks, especially when real training sets are inaccessible due to privacy and security concerns. Most existing synthetic-data-driven ZSQ methods introduce diversity enhancement to simulate the distribution of real samples. However, the adaptivity between the enhancement degree and the network is neglected: whether a given enhancement degree benefits different network layers and different classes, and whether it strikes the best balance between inter-class distance and intra-class diversity. In the absence of a metric for class-wise and layer-wise diversity, a maladaptive enhancement degree risks mode collapse and inter-class inseparability. To address this issue, we propose ADEQ, a novel zero-shot quantization method. For layer-wise and class-wise adaptivity, the enhancement degree of each layer is adaptively initialized with a diversity coefficient. For inter-class adaptivity, an incremental diversity enhancement strategy is proposed to trade off inter-class distance against intra-class diversity. Extensive experiments on CIFAR-100 and ImageNet show that ADEQ achieves strong performance at low bit-widths; for example, when ResNet-18 is quantized to 3 bits, ADEQ improves top-1 accuracy on ImageNet by 17.78% over the state-of-the-art ARC. Code is available at https://github.com/dangsingrue/ADEQ.
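The abstract names two mechanisms: a per-layer diversity coefficient that initializes the enhancement degree, and an incremental enhancement schedule that balances inter-class distance against intra-class diversity. The PyTorch sketch below is not the authors' implementation (see the linked repository for that); it only illustrates one plausible reading, in which BatchNorm statistics supply a layer-wise diversity proxy and the enhancement degree ramps up over synthesis steps. The function names, the coefficient-of-variation proxy, and the linear schedule are all assumptions for illustration.

```python
import torch.nn as nn


def layerwise_diversity_coefficients(model: nn.Module) -> list:
    """Per-layer proxy for data diversity: the coefficient of variation of
    each BatchNorm layer's running variance. Hypothetical stand-in for the
    paper's diversity coefficient, which is defined in the full text."""
    coeffs = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            var = m.running_var
            coeffs.append((var.std() / (var.mean() + 1e-8)).item())
    return coeffs


def incremental_enhancement(step: int, total_steps: int,
                            base: float = 0.1, max_gain: float = 0.9) -> float:
    """Assumed incremental schedule: start with weak diversity enhancement
    (keeping synthetic classes separable), then ramp it up linearly so
    intra-class diversity grows as synthesis proceeds."""
    return base + max_gain * step / max(total_steps - 1, 1)


if __name__ == "__main__":
    net = nn.Sequential(
        nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
        nn.Conv2d(8, 16, 3), nn.BatchNorm2d(16),
    )
    print(layerwise_diversity_coefficients(net))  # one coefficient per BN layer
    print([round(incremental_enhancement(s, 100), 2) for s in (0, 50, 99)])
```

In a full ZSQ pipeline these values would presumably weight a per-layer diversity term in the image-synthesis loss, alongside the usual BatchNorm-statistics matching objective used by synthetic-data-driven ZSQ methods.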
Pages: 53-64 (12 pages)