Spatial Consistency Loss for Training Multi-Label Classifiers from Single-Label Annotations

Cited by: 0
Authors
Verelst, Thomas [1 ,2 ]
Rubenstein, Paul K. [2 ]
Eichner, Marcin [2 ]
Tuytelaars, Tinne [1 ]
Berman, Maxim [2 ]
Affiliations
[1] Katholieke Univ Leuven, ESAT PSI, Leuven, Belgium
[2] Apple, Cupertino, CA USA
DOI
10.1109/WACV56688.2023.00387
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-label image classification is more applicable "in the wild" than single-label classification, as natural images usually contain multiple objects. However, exhaustively annotating images with every object of interest is costly and time-consuming. We train multi-label classifiers from datasets where each image is annotated with a single positive label only. As the presence of all other classes is unknown, we propose an Expected Negative loss that builds a set of expected negative labels in addition to the annotated positives. This set is determined based on prediction consistency, by averaging predictions over consecutive training epochs to build robust targets. Moreover, random-crop data augmentation introduces additional label noise when it crops out the single annotated object. Our novel spatial consistency loss improves supervision and ensures consistency of the spatial feature maps by maintaining per-class running-average heatmaps for each training image. We use the MS-COCO, Pascal VOC, NUS-WIDE, and CUB-Birds datasets to demonstrate the gains of the Expected Negative loss in combination with the consistency and spatial consistency losses. We also demonstrate improved multi-label classification mAP on ImageNet-1K using the ReaL multi-label validation set.
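The averaging and expected-negative mechanisms described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the `momentum` and `threshold` values, and the use of a plain MSE for spatial consistency are all assumptions, and the crop realignment the paper applies to heatmaps is omitted.

```python
import numpy as np

def update_ema(ema, new, momentum=0.9):
    """Running (exponential moving) average of per-image predictions --
    or per-class spatial heatmaps -- across training epochs,
    used to build robust targets. `momentum` is an assumed value."""
    return momentum * ema + (1.0 - momentum) * new

def expected_negatives(ema_probs, positive_idx, threshold=0.05):
    """Classes whose averaged prediction stays below `threshold` are
    treated as expected negatives; the annotated positive is excluded."""
    neg = ema_probs < threshold
    neg[positive_idx] = False
    return np.flatnonzero(neg)

def expected_negative_loss(probs, positive_idx, neg_idx, eps=1e-7):
    """Binary cross-entropy over the single annotated positive plus the
    expected negatives; classes in neither set contribute no gradient."""
    loss = -np.log(probs[positive_idx] + eps)
    loss -= np.log(1.0 - probs[neg_idx] + eps).sum()
    return float(loss)

def spatial_consistency_loss(heatmap, ema_heatmap):
    """Penalize deviation of the current per-class feature map from its
    running average (crop realignment from the paper is omitted here)."""
    return float(np.mean((heatmap - ema_heatmap) ** 2))
```

In this sketch, classes that are neither the annotated positive nor confidently negative under the running average are simply left out of the loss, which is one plausible way to avoid penalizing unannotated-but-present objects.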
Pages: 3868-3878
Page count: 11
Related papers (50 total)
  • [1] Single-label and multi-label conceptor classifiers in pre-trained neural networks
    Qian, Guangwu; Zhang, Lei; Wang, Yan
    Neural Computing & Applications, 2019, 31(10): 6179-6188
  • [2] Deep Hash Learning of Feature-Invariant Representation for Single-Label and Multi-label Retrieval
    Cao, Yuan; Shang, Xinzheng; Liu, Junwei; Qian, Chengzhi; Chen, Sheng
    Algorithms and Architectures for Parallel Processing (ICA3PP 2023), Part I, 2024, 14487: 17-29
  • [3] Multi-dimensional multi-label classification: Towards encompassing heterogeneous label spaces and multi-label annotations
    Jia, Bin-Bin; Zhang, Min-Ling
    Pattern Recognition, 2023, 138
  • [4] On the consistency of multi-label learning
    Gao, Wei; Zhou, Zhi-Hua
    Artificial Intelligence, 2013, 199: 22-44
  • [5] Study of data transformation techniques for adapting single-label prototype selection algorithms to multi-label learning
    Arnaiz-Gonzalez, Alvar; Diez-Pastor, Jose-Francisco; Rodriguez, Juan J.; Garcia-Osorio, Cesar
    Expert Systems with Applications, 2018, 109: 114-130
  • [6] Threshold optimisation for multi-label classifiers
    Pillai, Ignazio; Fumera, Giorgio; Roli, Fabio
    Pattern Recognition, 2013, 46(7): 2055-2065
  • [7] Measure Optimisation in Multi-label Classifiers
    Pillai, Ignazio; Fumera, Giorgio; Roli, Fabio
    2012 21st International Conference on Pattern Recognition (ICPR 2012), 2012: 2424-2427
  • [8] Multi-label vs. combined single-label sound event detection with deep neural networks
    Cakir, Emre; Heittola, Toni; Huttunen, Heikki; Virtanen, Tuomas
    2015 23rd European Signal Processing Conference (EUSIPCO), 2015: 2551-2555
  • [9] Robust Learning of Multi-Label Classifiers under Label Noise
    Kumar, Himanshu; Manwani, Naresh; Sastry, P. S.
    Proceedings of the 7th ACM IKDD CoDS and 25th COMAD (CODS-COMAD 2020), 2020: 90-97