Balanced Product of Calibrated Experts for Long-Tailed Recognition

Cited by: 16
Authors
Aimar, Emanuel Sanchez [1]
Jonnarth, Arvi [1,3]
Felsberg, Michael [1,4]
Kuhlmann, Marco [2]
Affiliations
[1] Linkoping Univ, Dept Elect Engn, Linkoping, Sweden
[2] Linkoping Univ, Dept Comp & Informat Sci, Linkoping, Sweden
[3] Husqvarna Grp, Huskvarna, Sweden
[4] Univ KwaZulu Natal, Durban, South Africa
Funding
Swedish Research Council;
Keywords
MIXTURES;
DOI
10.1109/CVPR52729.2023.01912
CLC Number
TP18 [Theory of artificial intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Many real-world recognition problems are characterized by long-tailed label distributions. These distributions make representation learning highly challenging due to limited generalization over the tail classes. If the test distribution differs from the training distribution, e.g. uniform versus long-tailed, the problem of the distribution shift needs to be addressed. A recent line of work proposes learning multiple diverse experts to tackle this issue. Ensemble diversity is encouraged by various techniques, e.g. by specializing different experts in the head and the tail classes. In this work, we take an analytical approach and extend the notion of logit adjustment to ensembles to form a Balanced Product of Experts (BalPoE). BalPoE combines a family of experts with different test-time target distributions, generalizing several previous approaches. We show how to properly define these distributions and combine the experts in order to achieve unbiased predictions, by proving that the ensemble is Fisher-consistent for minimizing the balanced error. Our theoretical analysis shows that our balanced ensemble requires calibrated experts, which we achieve in practice using mixup. We conduct extensive experiments and our method obtains new state-of-the-art results on three long-tailed datasets: CIFAR-100-LT, ImageNet-LT, and iNaturalist-2018. Our code is available at https://github.com/emasa/BalPoE-CalibratedLT.
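For orientation, the construction the abstract describes can be sketched from the standard logit-adjustment framework it builds on. The following is a reconstruction under common conventions, not an excerpt from the paper, and the symbols (lambda_k, pi) are ours. Let pi denote the training class priors, and let expert k, with adjustment strength \lambda_k, be trained with cross-entropy on prior-adjusted logits:

\ell_k(x, y) = -\log \mathrm{softmax}\big( z_k(x) + \lambda_k \log \pi \big)_y .

At the optimum of this loss, z_k(x)_y = \log p(x \mid y) + (1 - \lambda_k) \log \pi_y + c_k(x), so averaging the raw logits (a product of experts over the softmax outputs) gives

\bar{z}(x)_y = \frac{1}{K} \sum_{k=1}^{K} z_k(x)_y = \log p(x \mid y) + (1 - \bar{\lambda}) \log \pi_y + c(x), \qquad \bar{\lambda} = \frac{1}{K} \sum_{k=1}^{K} \lambda_k .

The prior term vanishes exactly when \bar{\lambda} = 1, in which case \arg\max_y \bar{z}(x)_y is the Bayes-optimal classifier under a uniform test prior; this is the sense in which the ensemble is Fisher-consistent for the balanced error, and it presumes calibrated experts, which the paper enforces with mixup. A minimal NumPy sketch of the prediction rule and of mixup, under the same assumptions (expert_train_probs, balpoe_predict, and mixup are illustrative names, not the authors' API):

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def expert_train_probs(z_k, lam_k, log_prior):
    # Expert k is trained with cross-entropy on prior-adjusted logits;
    # its raw logits z_k then implicitly target the test prior pi^(1 - lam_k).
    return softmax(z_k + lam_k * log_prior)

def balpoe_predict(expert_logits, lambdas):
    # Product of experts: the arithmetic mean of logits is the geometric
    # mean of the softmax outputs. Consistency for the balanced error
    # requires mean(lambdas) == 1, e.g. (0, 1, 2) for three experts.
    assert np.isclose(np.mean(lambdas), 1.0)
    return np.argmax(np.mean(expert_logits, axis=0), axis=-1)

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random):
    # Mixup regularization, which the paper uses to calibrate the experts:
    # train on convex combinations of input pairs and their one-hot labels.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2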
Pages: 19967-19977
Page count: 11