Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval

Cited by: 10
Authors
Xie, Yi [1 ]
Zhang, Huaidong [1 ]
Xu, Xuemiao [1 ,4 ,5 ,6 ]
Zhu, Jianqing [2 ]
He, Shengfeng [3 ]
Affiliations
[1] South China Univ Technol, Guangzhou, Guangdong, Peoples R China
[2] Huaqiao Univ, Quanzhou, Peoples R China
[3] Singapore Management Univ, Singapore, Singapore
[4] State Key Lab Subtrop Bldg Sci, Guangzhou, Guangdong, Peoples R China
[5] Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou, Guangdong, Peoples R China
[6] Guangdong Prov Key Lab Computat Intelligence & Cy, Guangzhou, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
NETWORK
DOI
10.1109/CVPR52729.2023.01536
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Previous knowledge-distillation-based efficient image retrieval methods employ a lightweight network as the student model for fast inference. However, the lightweight student lacks adequate representation capacity for effective knowledge imitation during the most critical early training period, which degrades final performance. To tackle this issue, we propose a Capacity Dynamic Distillation framework, which constructs a student model with editable representation capacity. Specifically, the student starts as a heavy model so that it can absorb the distilled knowledge during the early training epochs, and it is then gradually compressed as training proceeds. To adjust the model capacity dynamically, our framework inserts a learnable convolutional layer within each residual block of the student as a channel importance indicator. The indicator is optimized jointly by the image retrieval loss and a compression loss, and a retrieval-guided gradient resetting mechanism is proposed to resolve the conflict between their gradients. Extensive experiments show that our method achieves superior inference speed and accuracy; e.g., on the VeRi-776 dataset, with ResNet101 as the teacher, our method saves 67.13% of the model parameters and 65.67% of the FLOPs without sacrificing accuracy. Code is available at https://github.com/SCY-X/Capacity_Dynamic_Distillation.
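As a rough illustration of the mechanism the abstract describes, the sketch below shows how a learnable 1x1 convolution inside a residual block could serve as a channel importance indicator, paired with an L1-style compression loss on its per-channel scales. This is a minimal sketch under assumptions, not the authors' code: the names (IndicatorBlock, compression_loss), the depthwise-diagonal form of the indicator, and the L1 penalty are illustrative choices; the official implementation is at the GitHub link above.

```python
# Minimal PyTorch sketch (hypothetical; see the authors' repo for the real code).
import torch
import torch.nn as nn

class IndicatorBlock(nn.Module):
    """A basic residual block augmented with a channel importance indicator."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # Learnable 1x1 depthwise conv: one scalar per channel. Channels whose
        # scale is driven toward zero by the compression loss can be pruned.
        self.indicator = nn.Conv2d(channels, channels, 1, groups=channels, bias=False)
        nn.init.ones_(self.indicator.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.indicator(out)  # scale each channel by its learned importance
        return self.relu(out + x)

def compression_loss(model: nn.Module, weight: float = 1e-4) -> torch.Tensor:
    """L1 penalty pushing indicator scales toward zero, i.e. toward pruning."""
    penalty = sum(m.indicator.weight.abs().sum()
                  for m in model.modules() if isinstance(m, IndicatorBlock))
    return weight * penalty
```

In this sketch the total training objective would be the retrieval loss plus compression_loss(student); channels with near-zero indicator scales are pruned as training proceeds, gradually shrinking the heavy student. The paper's retrieval-guided gradient resetting mechanism, which arbitrates between the two losses' gradients on the indicator, is omitted here.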
Pages: 16006-16015
Number of pages: 10
Related Papers
50 records in total
  • [1] Dynamic Contrastive Distillation for Image-Text Retrieval
    Rao, Jun
    Ding, Liang
    Qi, Shuhan
    Fang, Meng
    Liu, Yang
    Shen, Li
    Tao, Dacheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8383 - 8395
  • [2] Towards Efficient for Learning Model Image Retrieval
    Ghrabat, Mudhafar Jalil Jassim
    Ma, Guangzhi
    Cheng, Chih
    2018 14TH INTERNATIONAL CONFERENCE ON SEMANTICS, KNOWLEDGE AND GRIDS (SKG), 2018, : 92 - 99
  • [3] Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
    Zhang, Linfeng
    Chen, Xin
    Tu, Xiaobing
    Wan, Pengfei
    Xu, Ning
    Ma, Kaisheng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12454 - 12464
  • [4] Towards efficient image retrieval based on multiple features
    Ooi, BC
    Shen, HT
    Xia, CY
    ICICS-PCM 2003, VOLS 1-3, PROCEEDINGS, 2003, : 180 - 185
  • [5] Deep Hash Distillation for Image Retrieval
    Jang, Young Kyun
    Gu, Geonmo
    Ko, Byungsoo
    Kang, Isaac
    Cho, Nam Ik
    COMPUTER VISION - ECCV 2022, PT XIV, 2022, 13674 : 354 - 371
  • [6] Efficient dynamic image retrieval using the a trous wavelet transformation
    Joubert, GR
    Kao, O
    ADVANCES IN MUTLIMEDIA INFORMATION PROCESSING - PCM 2001, PROCEEDINGS, 2001, 2195 : 343 - 350
  • [7] Cyclic distillation - towards energy efficient binary distillation
    Kiss, Anton A.
    Landaeta, Servando J. Flores
    Zondervan, Edwin
    22 EUROPEAN SYMPOSIUM ON COMPUTER AIDED PROCESS ENGINEERING, 2012, 30 : 697 - 701
  • [8] Contextual Similarity Distillation for Asymmetric Image Retrieval
    Wu, Hui
    Wang, Min
    Zhou, Wengang
    Li, Houqiang
    Tian, Qi
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9479 - 9488
  • [9] Unambiguous granularity distillation for asymmetric image retrieval
    Zhang, Hongrui
    Xie, Yi
    Zhang, Haoquan
    Xu, Cheng
    Luo, Xuandi
    Chen, Donglei
    Xu, Xuemiao
    Zhang, Huaidong
    Heng, Pheng Ann
    He, Shengfeng
    NEURAL NETWORKS, 2025, 187