Distilling Virtual Examples for Long-tailed Recognition

Cited by: 41
Authors
He, Yin-Yin [1 ]
Wu, Jianxin [1 ]
Wei, Xiu-Shen [1 ,2 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SMOTE;
DOI
10.1109/ICCV48922.2021.00030
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
We tackle the long-tailed visual recognition problem from the knowledge distillation perspective by proposing a Distill the Virtual Examples (DiVE) method. Specifically, by treating the predictions of a teacher model as virtual examples, we prove that distilling from these virtual examples is equivalent to label distribution learning under certain constraints. We show that when the virtual example distribution becomes flatter than the original input distribution, the under-represented tail classes will receive significant improvements, which is crucial in long-tailed recognition. The proposed DiVE method can explicitly tune the virtual example distribution to become flat. Extensive experiments on three benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed DiVE method can significantly outperform state-of-the-art methods. Furthermore, additional analyses and experiments verify the virtual example interpretation, and demonstrate the effectiveness of tailored designs in DiVE for long-tailed problems.
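The flattening idea in the abstract can be sketched numerically. Under DiVE's interpretation, a teacher's soft predictions act as fractional "virtual examples" per class, and a standard distillation knob such as temperature scaling makes the aggregate virtual distribution flatter than the long-tailed input distribution, shifting mass toward tail classes. The sketch below uses illustrative class counts and plain temperature-scaled softmax; it is not the paper's actual pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a larger T yields a flatter distribution."""
    z = logits / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical long-tailed class counts (illustrative, not from the paper):
# head classes dominate the tail.
class_counts = np.array([500.0, 200.0, 80.0, 30.0, 10.0, 4.0])
class_logits = np.log(class_counts)

p_orig = softmax(class_logits, T=1.0)  # proportional to the raw counts
p_flat = softmax(class_logits, T=4.0)  # proportional to counts ** (1 / 4)

# Temperature flattening moves probability mass from head to tail classes,
# so under-represented classes receive relatively more "virtual examples".
print(p_orig.round(3))
print(p_flat.round(3))
```

Because `softmax(log(c) / T)` is proportional to `c ** (1 / T)`, raising the temperature compresses the ratio between head and tail classes, which is one concrete way a virtual example distribution can be made flatter than the original label distribution.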
Pages: 235-244
Page count: 10
Related papers
50 records in total
  • [21] A dual progressive strategy for long-tailed visual recognition
    Liang, Hong
    Cao, Guoqing
    Shao, Mingwen
    Zhang, Qian
    MACHINE VISION AND APPLICATIONS, 2024, 35 (01)
  • [22] Local pseudo-attributes for long-tailed recognition
    Kim, Dong-Jin
    Ke, Tsung-Wei
    Yu, Stella X.
    PATTERN RECOGNITION LETTERS, 2023, 172 : 51 - 57
  • [23] Towards Effective Collaborative Learning in Long-Tailed Recognition
    Xu, Zhengzhuo
    Chai, Zenghao
    Xu, Chengyin
    Yuan, Chun
    Yang, Haiqin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 3754 - 3764
  • [24] Nested Collaborative Learning for Long-Tailed Visual Recognition
    Li, Jun
    Tan, Zichang
    Wan, Jun
    Lei, Zhen
    Guo, Guodong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6939 - 6948
  • [25] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition
    Du, Chaoqun
    Wang, Yulin
    Song, Shiji
    Huang, Gao
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (09) : 5890 - 5904
  • [26] Targeted Supervised Contrastive Learning for Long-Tailed Recognition
    Li, Tianhong
    Cao, Peng
    Yuan, Yuan
    Fan, Lijie
    Yang, Yuzhe
    Feris, Rogerio
    Indyk, Piotr
    Katabi, Dina
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6908 - 6918
  • [27] Inverse Image Frequency for Long-Tailed Image Recognition
    Alexandridis, Konstantinos Panagiotis
    Luo, Shan
    Nguyen, Anh
    Deng, Jiankang
    Zafeiriou, Stefanos
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 5721 - 5736
  • [28] Exploring the auxiliary learning for long-tailed visual recognition
    Zhang, Junjie
    Liu, Lingqiao
    Wang, Peng
    Zhang, Jian
    NEUROCOMPUTING, 2021, 449 : 303 - 314
  • [29] Balanced self-distillation for long-tailed recognition
    Ren, Ning
    Li, Xiaosong
    Wu, Yanxia
    Fu, Yan
    KNOWLEDGE-BASED SYSTEMS, 2024, 290
  • [30] Balanced Contrastive Learning for Long-Tailed Visual Recognition
    Zhu, Jianggang
    Wang, Zheng
    Chen, Jingjing
    Chen, Yi-Ping Phoebe
    Jiang, Yu-Gang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6898 - 6907