Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval

Cited by: 10
Authors
Xie, Yi [1]
Zhang, Huaidong [1]
Xu, Xuemiao [1,4,5,6]
Zhu, Jianqing [2]
He, Shengfeng [3]
Affiliations
[1] South China Univ Technol, Guangzhou, Guangdong, Peoples R China
[2] Huaqiao Univ, Quanzhou, Peoples R China
[3] Singapore Management Univ, Singapore, Singapore
[4] State Key Lab Subtrop Bldg Sci, Guangzhou, Guangdong, Peoples R China
[5] Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou, Guangdong, Peoples R China
[6] Guangdong Prov Key Lab Computat Intelligence & Cy, Guangzhou, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NETWORK;
DOI
10.1109/CVPR52729.2023.01536
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Previous knowledge-distillation-based efficient image retrieval methods employ a lightweight network as the student model for fast inference. However, the lightweight student lacks adequate representation capacity to imitate the teacher's knowledge effectively during the most critical early training period, which degrades final performance. To tackle this issue, we propose a Capacity Dynamic Distillation framework that constructs a student model with editable representation capacity. Specifically, the student is initially a heavy model so that it can fully absorb the distilled knowledge in the early training epochs, and it is then gradually compressed as training proceeds. To adjust the model capacity dynamically, our framework inserts a learnable convolutional layer into each residual block of the student as a channel importance indicator. The indicator is optimized jointly by the image retrieval loss and the compression loss, and a retrieval-guided gradient resetting mechanism is proposed to resolve the conflict between their gradients. Extensive experiments show that our method achieves superior inference speed and accuracy; e.g., on the VeRi-776 dataset, with ResNet101 as the teacher, our method saves 67.13% of the model parameters and 65.67% of the FLOPs without sacrificing accuracy. Code is available at https://github.com/SCY-X/Capacity_Dynamic_Distillation.
Pages: 16006-16015
Number of pages: 10
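
To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the two ideas it names: a learnable per-channel gate inserted into a residual block as the channel importance indicator, and a retrieval-guided gradient resetting step that discards the compression gradient where it conflicts with the retrieval gradient. Everything here is inferred from the abstract alone; the module names (ChannelIndicator, GatedBlock), the L1 compression penalty, the sign-conflict resetting rule, and the MSE stand-in for the retrieval/distillation objective are all hypothetical, and the authors' actual design in the linked repository may differ.

import torch
import torch.nn as nn

class ChannelIndicator(nn.Module):
    # Hypothetical channel-importance indicator: a depthwise 1x1
    # convolution, i.e. one learnable scale per channel. Channels whose
    # scale shrinks toward zero can be pruned after training.
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1,
                              groups=channels, bias=False)
        nn.init.ones_(self.gate.weight)  # start with every channel open

    def forward(self, x):
        return self.gate(x)

class GatedBlock(nn.Module):
    # Toy residual block with the indicator on the residual branch.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.indicator = ChannelIndicator(channels)

    def forward(self, x):
        return x + self.indicator(torch.relu(self.bn(self.conv(x))))

block = GatedBlock(16)
x = torch.randn(4, 16, 32, 32)
teacher_feat = torch.randn(4, 16, 32, 32)  # stand-in for teacher features

student_feat = block(x)
# Proxy for the retrieval/distillation objective (assumed: MSE to teacher).
retrieval_loss = nn.functional.mse_loss(student_feat, teacher_feat)
# Assumed compression loss: L1 sparsity on the indicator weights.
compress_loss = 1e-3 * block.indicator.gate.weight.abs().sum()

w = block.indicator.gate.weight
(g_ret,) = torch.autograd.grad(retrieval_loss, w, retain_graph=True)
(g_cmp,) = torch.autograd.grad(compress_loss, w)

# Assumed form of retrieval-guided gradient resetting: zero the
# compression gradient wherever its sign opposes the retrieval gradient,
# so pruning pressure never fights the retrieval objective.
g_cmp = torch.where(g_ret * g_cmp < 0, torch.zeros_like(g_cmp), g_cmp)
w.grad = g_ret + g_cmp  # an optimizer.step() would now update w

After training, channels whose gate magnitude stays near zero would be removed to obtain the smaller student, which is presumably how the reported 67.13% parameter and 65.67% FLOP savings arise.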