Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions

Cited by: 26
Authors
Du, Fei [1 ,2 ,3 ]
Yang, Peng [1 ,3 ]
Jia, Qi [1 ,3 ]
Nan, Fengtao [1 ,2 ,3 ]
Chen, Xiaoting [1 ,3 ]
Yang, Yun [1 ,3 ]
Affiliations
[1] Yunnan Univ, Natl Pilot Sch Software, Kunming, Peoples R China
[2] Yunnan Univ, Sch Informat Sci & Engn, Kunming, Peoples R China
[3] Yunnan Key Lab Software Engn, Kunming, Peoples R China
Keywords
DOI
10.1109/CVPR52729.2023.01518
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
In this paper, our goal is to design a simple learning paradigm for long-tail visual recognition, which not only improves the robustness of the feature extractor but also alleviates the bias of the classifier towards head classes, while reducing training tricks and overhead. We propose an efficient one-stage training strategy for long-tailed visual recognition called Global and Local Mixture Consistency cumulative learning (GLMC). Our core ideas are twofold: (1) a global and local mixture consistency loss improves the robustness of the feature extractor. Specifically, we generate two augmented batches from the same batch data by global MixUp and local CutMix, respectively, and then use cosine similarity to minimize the difference between their representations. (2) A cumulative head-tail soft label reweighted loss mitigates the head class bias problem. We use empirical class frequencies to reweight the mixed head-tail labels for long-tailed data and then balance the conventional loss and the rebalanced loss with a coefficient accumulated over epochs. Our approach achieves state-of-the-art accuracy on the CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets. Additional experiments on balanced ImageNet and CIFAR demonstrate that GLMC can significantly improve the generalization of backbones. Code is made publicly available at https://github.com/ynu-yangpeng/GLMC.
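The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked GitHub repository for that): the function names, the quadratic schedule for the cumulative coefficient, and the inverse-frequency label reweighting are assumptions made here for illustration only.

```python
import numpy as np

def mixup(x, y, lam):
    """Global MixUp: convex combination of a batch with a shuffled copy of itself."""
    idx = np.random.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y + (1 - lam) * y[idx]  # soft mixed labels
    return x_mix, y_mix

def cutmix(x, y, lam):
    """Local CutMix: paste a random patch from shuffled images (x is NCHW)."""
    idx = np.random.permutation(len(x))
    n, c, h, w = x.shape
    cut = np.sqrt(1 - lam)                       # patch side ratio
    ch, cw = int(h * cut), int(w * cut)
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - ch // 2, 0, h), np.clip(cy + ch // 2, 0, h)
    x1, x2 = np.clip(cx - cw // 2, 0, w), np.clip(cx + cw // 2, 0, w)
    x_mix = x.copy()
    x_mix[:, :, y1:y2, x1:x2] = x[idx][:, :, y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # actual kept-area ratio
    y_mix = lam_adj * y + (1 - lam_adj) * y[idx]
    return x_mix, y_mix

def consistency_loss(z_global, z_local):
    """Cosine-similarity consistency between the embeddings of the
    globally (MixUp) and locally (CutMix) augmented batches."""
    z_global = z_global / np.linalg.norm(z_global, axis=1, keepdims=True)
    z_local = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    return np.mean(1.0 - np.sum(z_global * z_local, axis=1))

def cumulative_alpha(epoch, total_epochs):
    """Coefficient that shifts weight from the conventional loss to the
    rebalanced loss as training progresses (quadratic decay is an assumption)."""
    return 1.0 - (epoch / total_epochs) ** 2

def reweight_soft_label(y_mix, class_freq):
    """Reweight mixed soft labels by inverse empirical class frequency,
    boosting tail classes, then renormalize to a distribution."""
    y_rw = y_mix * (1.0 / class_freq)
    return y_rw / y_rw.sum(axis=1, keepdims=True)
```

The total loss would then combine `alpha * conventional_loss + (1 - alpha) * rebalanced_loss + consistency_loss`, so early epochs emphasize representation learning and later epochs emphasize rebalancing the classifier.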
Pages: 15814-15823
Page count: 10
Related Papers (50 total)
  • [21] bt-vMF Contrastive and Collaborative Learning for Long-Tailed Visual Recognition
    Du, Jinhao
    Luo, Guibo
    Zhu, Yuesheng
    Bai, Zhiqiang
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 573 - 577
  • [22] NCL++: Nested Collaborative Learning for long-tailed visual recognition
    Tan, Zichang
    Li, Jun
    Du, Jinhao
    Wan, Jun
    Lei, Zhen
    Guo, Guodong
    PATTERN RECOGNITION, 2024, 147
  • [23] Dynamic collaborative learning with heterogeneous knowledge transfer for long-tailed visual recognition
    Zhou, Hao
    Luo, Tingjin
    He, Yongming
    INFORMATION FUSION, 2025, 115
  • [24] Towards Realistic Long-Tailed Semi-Supervised Learning: Consistency Is All You Need
    Wei, Tong
    Gan, Kai
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3469 - 3478
  • [25] Center-Wise Feature Consistency Learning for Long-Tailed Remote Sensing Object Recognition
    Zhao, Wenda
    Zhang, Zhepu
    Liu, Jiani
    Liu, Yu
    He, You
    Lu, Huchuan
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 11
  • [26] Curricular-balanced long-tailed learning
    Xiang, Xiang
    Zhang, Zihan
    Chen, Xilin
    NEUROCOMPUTING, 2024, 571
  • [27] Learning Prototype Classifiers for Long-Tailed Recognition
    Sharma, Saurabh
    Xian, Yongqin
    Yu, Ning
    Singh, Ambuj
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1360 - 1368
  • [28] ResLT: Residual Learning for Long-Tailed Recognition
    Cui, Jiequan
    Liu, Shu
    Tian, Zhuotao
    Zhong, Zhisheng
    Jia, Jiaya
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3695 - 3706
  • [29] Decoupled Contrastive Learning for Long-Tailed Recognition
    Xuan, Shiyu
    Zhang, Shiliang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 6396 - 6403
  • [30] Balanced knowledge distillation for long-tailed learning
    Zhang, Shaoyu
    Chen, Chen
    Hu, Xiyuan
    Peng, Silong
    NEUROCOMPUTING, 2023, 527 : 36 - 46