Learning with Noisy Labels via Sparse Regularization

Cited by: 35
Authors
Zhou, Xiong [1 ,2 ]
Liu, Xianming [1 ,2 ]
Wang, Chenyang [1 ]
Zhai, Deming [1 ]
Jiang, Junjun [1 ,2 ]
Ji, Xiangyang [3 ]
Affiliations
[1] Harbin Institute of Technology, Harbin, China
[2] Peng Cheng Laboratory, Shenzhen, China
[3] Tsinghua University, Beijing, China
Funding
National Natural Science Foundation of China
DOI
10.1109/ICCV48922.2021.00014
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning with noisy labels is an important and challenging task for training accurate deep neural networks. Commonly used loss functions such as Cross Entropy (CE) suffer from severe overfitting to noisy labels. Robust loss functions that satisfy the symmetric condition have been tailored to remedy this problem, but they in turn suffer from underfitting. In this paper, we theoretically prove that any loss can be made robust to noisy labels by restricting the network output to the set of permutations over a fixed vector. When the fixed vector is one-hot, we only need to constrain the output to be one-hot; this constraint, however, produces zero gradients almost everywhere and thus makes gradient-based optimization difficult. We therefore introduce a sparse regularization strategy to approximate the one-hot constraint. It combines an output sharpening operation, which enforces a sharp output distribution, with $\ell_p$-norm ($p \le 1$) regularization, which promotes a sparse network output. This simple approach guarantees the robustness of arbitrary loss functions without hindering their fitting ability. Experimental results demonstrate that our method significantly improves the performance of commonly used loss functions in the presence of noisy labels and class imbalance, and outperforms state-of-the-art methods. The code is available at https://github.com/hitcszx/lnl_sr.
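The abstract's two ingredients, output sharpening and $\ell_p$-norm ($p \le 1$) regularization, can be summarized as a single regularized training objective. Below is a minimal PyTorch sketch of that idea; the function name `sparse_regularized_loss` and the default values of the temperature `tau`, the weight `lam`, and the norm order `p` are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def sparse_regularized_loss(logits, targets, tau=0.5, lam=1.0, p=0.1):
    """Illustrative sketch: a base loss on a temperature-sharpened output
    plus an l_p-norm (p <= 1) penalty that promotes sparse (near one-hot)
    predictions. All hyperparameter defaults here are assumptions."""
    # Output sharpening: dividing logits by a temperature tau < 1 makes
    # the softmax distribution peakier, approximating a one-hot output.
    probs = F.softmax(logits / tau, dim=1)
    # Base loss; here cross entropy on the sharpened distribution, though
    # the paper's claim is that the scheme robustifies arbitrary losses.
    base = F.nll_loss(torch.log(probs.clamp_min(1e-8)), targets)
    # l_p-norm regularization with p <= 1: for a probability vector,
    # sum_j probs_j^p >= 1 with equality only at a one-hot vector, so
    # minimizing this term pushes the output toward the one-hot set
    # where the robustness result applies.
    reg = probs.pow(p).sum(dim=1).mean()
    return base + lam * reg

# Example usage with random data (4 samples, 10 classes):
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = sparse_regularized_loss(logits, targets)
loss.backward()
```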
Pages: 72-81
Number of pages: 10