Distilling Virtual Examples for Long-tailed Recognition

Cited by: 41
Authors
He, Yin-Yin [1 ]
Wu, Jianxin [1 ]
Wei, Xiu-Shen [1 ,2 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SMOTE;
DOI
10.1109/ICCV48922.2021.00030
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We tackle the long-tailed visual recognition problem from the knowledge distillation perspective by proposing a Distill the Virtual Examples (DiVE) method. Specifically, by treating the predictions of a teacher model as virtual examples, we prove that distilling from these virtual examples is equivalent to label distribution learning under certain constraints. We show that when the virtual example distribution becomes flatter than the original input distribution, the under-represented tail classes receive significant improvements, which is crucial in long-tailed recognition. The proposed DiVE method can explicitly tune the virtual example distribution to become flat. Extensive experiments on three benchmark datasets, including the large-scale iNaturalist datasets, show that the proposed DiVE method significantly outperforms state-of-the-art methods. Furthermore, additional analyses and experiments verify the virtual example interpretation and demonstrate the effectiveness of the tailored designs in DiVE for long-tailed problems.
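The flattening mechanism the abstract describes can be illustrated with standard temperature-scaled knowledge distillation, the building block DiVE starts from. The sketch below is not the paper's exact method (its tailored designs are not given in this record); it is a minimal NumPy illustration, with the function names `softmax` and `distill_loss` being our own, showing how a temperature above 1 flattens the teacher's "virtual example" distribution and how the student is trained to match it via KL divergence.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Higher temperature -> flatter (closer to uniform) distribution,
    # which shifts more probability mass onto tail classes.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the flattened teacher distribution (the
    # "virtual examples") and the student's matching soft predictions.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean())
```

With a confident teacher output such as logits `[3.0, 1.0, 0.2]`, raising the temperature lowers the peak probability and raises the tail probabilities, so the student is supervised by a flatter target, matching the abstract's claim that a flatter virtual example distribution helps under-represented classes.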
Pages: 235-244
Page count: 10