Iterative Self Knowledge Distillation - From Pothole Classification to Fine-Grained and COVID Recognition

Citations: 1
Authors
Peng, Kuan-Chuan [1 ]
Affiliation
[1] Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA 02139 USA
Keywords
Teacher-free knowledge distillation; iterative self knowledge distillation
DOI
10.1109/ICASSP43922.2022.9746470
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Pothole classification has become an important task for road inspection vehicles, helping to save drivers from potential car accidents and repair bills. Given limited computational power and a fixed number of training epochs, we propose iterative self knowledge distillation (ISKD) to train lightweight pothole classifiers. Designed to improve both the teacher and student models over time, ISKD outperforms the state-of-the-art self knowledge distillation method on three pothole classification datasets across four lightweight network architectures, which supports performing self knowledge distillation iteratively rather than just once. The accuracy relation between the teacher and student models shows that the student model can still benefit from a moderately trained teacher model. Our results also imply that better teacher models generally produce better student models, which justifies the design of ISKD. Beyond pothole classification, we demonstrate the efficacy of ISKD on six additional datasets covering generic classification, fine-grained classification, and a medical imaging application, showing that ISKD can serve as a general-purpose performance booster without the need for a given teacher model or extra trainable parameters.
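The abstract describes ISKD only in words. The sketch below illustrates, under stated assumptions, what iterative teacher-free distillation could look like: the student trained in one round becomes the teacher for the next, with a standard distillation loss mixing hard-label cross-entropy and a temperature-softened KL term. The names `kd_loss` and `iskd`, the temperature `T`, the weight `alpha`, and the round schedule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Common distillation objective:
    (1 - alpha) * CE(hard labels) + alpha * T^2 * KL(teacher_soft || student_soft)."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[np.arange(len(labels)), labels] + 1e-12).mean()
    ps_T = softmax(student_logits, T)
    pt_T = softmax(teacher_logits, T)
    kl = (pt_T * (np.log(pt_T + 1e-12) - np.log(ps_T + 1e-12))).sum(-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

def iskd(train_round, num_rounds=3):
    """Iterative self knowledge distillation loop (teacher-free):
    the teacher in round k is the student trained in round k-1.
    Round 0 has no teacher, so it uses plain cross-entropy (alpha=0).
    `train_round(teacher, alpha)` is a user-supplied routine that trains
    one student for the fixed epoch budget and returns it."""
    teacher = None
    for _ in range(num_rounds):
        alpha = 0.0 if teacher is None else 0.5
        teacher = train_round(teacher, alpha)
    return teacher
```

Note that when the student matches the teacher, the KL term vanishes, so later rounds can only push the student beyond its previous self if the soft targets carry extra information (the "dark knowledge" rationale for distillation).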
Pages: 3139-3143
Page count: 5
Related papers
50 in total
  • [21] Leveraging Fine-Grained Labels to Regularize Fine-Grained Visual Classification
    Wu, Junfeng
    Yao, Li
    Liu, Bin
    Ding, Zheyuan
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON COMPUTER MODELING AND SIMULATION (ICCMS 2019) AND 8TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND APPLICATIONS (ICICA 2019), 2019, : 133 - 136
  • [22] Efficient Fine-Grained Object Recognition in High-Resolution Remote Sensing Images From Knowledge Distillation to Filter Grafting
    Wang, Liuqian
    Zhang, Jing
    Tian, Jimiao
    Li, Jiafeng
    Zhuo, Li
    Tian, Qi
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [23] Dynamic semantic structure distillation for low-resolution fine-grained recognition
    Liang, Mingjiang
    Huang, Shaoli
    Liu, Wei
    PATTERN RECOGNITION, 2024, 148
  • [24] Multi-level knowledge distillation for fine-grained fashion image retrieval
    Xiao, Ling
    Yamasaki, Toshihiko
    KNOWLEDGE-BASED SYSTEMS, 2025, 310
  • [25] Multiview attention networks for fine-grained watershed categorization via knowledge distillation
    Gong, Huimin
    Zhang, Cheng
    Teng, Jinlin
    Liu, Chunqing
    PLOS ONE, 2025, 20 (01)
  • [26] Fine-Grained Visual Classification using Self Assessment Classifier
    Do, Tuong
    Tran, Huy
    Tjiputra, Erman
    Tran, Quang D.
    Anh Nguyen
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 597 - 602
  • [27] Emotion knowledge-based fine-grained facial expression recognition
    Zhu, Jiacheng
    Ding, Yu
    Liu, Hanwei
    Chen, Keyu
    Lin, Zhanpeng
    Hong, Wenxing
    NEUROCOMPUTING, 2024, 610
  • [28] Enhancing Retail Product Recognition: Fine-Grained Bottle Size Classification
    Tolja, Katarina
    Subasic, Marko
    Kalafatic, Zoran
    Loncaric, Sven
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023
  • [29] Bidirectional Attention-Recognition Model for Fine-Grained Object Classification
    Liu, Chuanbin
    Xie, Hongtao
    Zha, Zhengjun
    Yu, Lingyun
    Chen, Zhineng
    Zhang, Yongdong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1785 - 1795
  • [30] Knowledge-Embedded Representation Learning for Fine-Grained Image Recognition
    Chen, Tianshui
    Lin, Liang
    Chen, Riquan
    Wu, Yang
    Luo, Xiaonan
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 627 - 634