ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot

Cited by: 89
Authors
Cai, Jiarui [1 ]
Wang, Yizhou [1 ]
Hwang, Jenq-Neng [1 ]
Institutions
[1] Univ Washington, Seattle, WA 98195 USA
DOI
10.1109/ICCV48922.2021.00018
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One-stage long-tailed recognition methods improve overall performance in a "seesaw" manner: they either sacrifice the head's accuracy for better tail classification or push the head's accuracy even higher while ignoring the tail. Existing algorithms bypass this trade-off with a multi-stage training process: pre-training on the imbalanced set and fine-tuning on a balanced set. Though these achieve promising performance, they are not only sensitive to the generalizability of the pre-trained model but also hard to integrate into other computer vision tasks such as detection and segmentation, where pre-training a classifier alone is not applicable. In this paper, we propose a one-stage long-tailed recognition scheme, Ally Complementary Experts (ACE), where each expert is the most knowledgeable specialist in the subset that dominates its training and complements the other experts in the less-seen categories, without being disturbed by what it has never seen. We also design a distribution-adaptive optimizer that adjusts each expert's learning pace to avoid over-fitting. Without bells and whistles, the vanilla ACE outperforms the current one-stage SOTA method by 3~10% on the CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist datasets. It is also the first method shown to break the "seesaw" trade-off by improving the accuracy of both the majority and minority categories in a single stage. Code and trained models are at https://github.com/jrcai/ACE.
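The abstract's core idea — experts that each score only the category subset dominating their training, then "ally" complementarily at inference — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the random expert weights, the nested head/medium/tail subset masks, and the NaN-masked logit averaging are all hypothetical choices made for demonstration.

```python
import numpy as np

def expert_logits(x, W, mask):
    """Score features with one expert's linear classifier; categories the
    expert never saw are NaN-masked so it cannot disturb them."""
    z = x @ W                      # (N, C) raw logits
    z[:, ~mask] = np.nan           # expert abstains outside its subset
    return z

def ace_predict(x, experts):
    """Ally the experts: average each category's logits over only the
    experts whose training subset covers it, then take the argmax."""
    stacked = np.stack([expert_logits(x, W, m) for W, m in experts])
    return np.nanmean(stacked, axis=0).argmax(axis=1)

# Toy setup: 8-dim features, 6 classes ordered head -> tail, 5 samples.
rng = np.random.default_rng(0)
D, C, N = 8, 6, 5
x = rng.normal(size=(N, D))
classes = np.arange(C)
experts = [
    (rng.normal(size=(D, C)), classes >= 0),  # "head" expert: all classes
    (rng.normal(size=(D, C)), classes >= 2),  # "medium" expert: medium + tail
    (rng.normal(size=(D, C)), classes >= 4),  # "tail" expert: tail only
]
preds = ace_predict(x, experts)
```

Because the subsets are nested, every category is covered by at least one expert, so the NaN-aware average is always defined; head classes are decided by the head expert alone, while tail classes pool votes from all three — one plausible reading of how the specialists stay complementary rather than competing.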
Pages: 112 / 121
Page count: 10
Related Papers
50 records in total
  • [41] A dual progressive strategy for long-tailed visual recognition
    Liang, Hong
    Cao, Guoqing
    Shao, Mingwen
    Zhang, Qian
    MACHINE VISION AND APPLICATIONS, 2024, 35 (01)
  • [42] Local pseudo-attributes for long-tailed recognition
    Kim, Dong-Jin
    Ke, Tsung-Wei
    Yu, Stella X.
    PATTERN RECOGNITION LETTERS, 2023, 172 : 51 - 57
  • [43] Towards Effective Collaborative Learning in Long-Tailed Recognition
    Xu, Zhengzhuo
    Chai, Zenghao
    Xu, Chengyin
    Yuan, Chun
    Yang, Haiqin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 3754 - 3764
  • [44] Nested Collaborative Learning for Long-Tailed Visual Recognition
    Li, Jun
    Tan, Zichang
    Wan, Jun
    Lei, Zhen
    Guo, Guodong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6939 - 6948
  • [45] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition
    Du, Chaoqun
    Wang, Yulin
    Song, Shiji
    Huang, Gao
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (09) : 5890 - 5904
  • [46] Beyond the Label Distribution Prior for Long-Tailed Recognition
    Li, Ming
    Cao, Liujuan
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT IV, 2023, 14089 : 792 - 803
  • [47] Balanced self-distillation for long-tailed recognition
    Ren, Ning
    Li, Xiaosong
    Wu, Yanxia
    Fu, Yan
    KNOWLEDGE-BASED SYSTEMS, 2024, 290
  • [48] Self Supervision to Distillation for Long-Tailed Visual Recognition
    Li, Tianhao
    Wang, Limin
    Wu, Gangshan
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 610 - 619
  • [49] Balanced Contrastive Learning for Long-Tailed Visual Recognition
    Zhu, Jianggang
    Wang, Zheng
    Chen, Jingjing
    Chen, Yi-Ping Phoebe
    Jiang, Yu-Gang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6898 - 6907
  • [50] Mixture-of-Experts Learner for Single Long-Tailed Domain Generalization
    Wang, Mengzhu
    Yuan, Jianlong
    Wang, Zhibin
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 290 - 299