Learning with Noisy Labels via Sparse Regularization

Cited by: 35
Authors
Zhou, Xiong [1 ,2 ]
Liu, Xianming [1 ,2 ]
Wang, Chenyang [1 ]
Zhai, Deming [1 ]
Jiang, Junjun [1 ,2 ]
Ji, Xiangyang [3 ]
Affiliations
[1] Harbin Inst Technol, Harbin, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ICCV48922.2021.00014
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning with noisy labels is an important and challenging task for training accurate deep neural networks. Commonly used loss functions such as Cross Entropy (CE) suffer from severe overfitting to noisy labels. Robust loss functions that satisfy the symmetric condition were tailored to remedy this problem, but they in turn suffer from underfitting. In this paper, we theoretically prove that any loss can be made robust to noisy labels by restricting the network output to the set of permutations over a fixed vector. When the fixed vector is one-hot, we only need to constrain the output to be one-hot, which, however, produces zero gradients almost everywhere and thus makes gradient-based optimization difficult. We therefore introduce a sparse regularization strategy to approximate the one-hot constraint, composed of a network-output sharpening operation that enforces a sharp output distribution and an ℓ_p-norm (p ≤ 1) regularization that promotes a sparse network output. This simple approach guarantees the robustness of arbitrary loss functions while not hindering the fitting ability. Experimental results demonstrate that our method can significantly improve the performance of commonly used loss functions in the presence of noisy labels and class imbalance, and outperforms state-of-the-art methods. The code is available at https://github.com/hitcszx/lnl_sr.
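As a rough illustration of the two ingredients described in the abstract, the sketch below combines a cross-entropy base loss with logit sharpening (a temperature tau > 1) and an ℓ_p-norm (p ≤ 1) penalty on the softmax output. The function name and the hyperparameter values (tau, p, lam) are illustrative assumptions, not the authors' exact configuration; see the linked repository for the official implementation.

```python
import torch
import torch.nn.functional as F

def sparse_regularized_loss(logits, targets, tau=2.0, p=0.7, lam=1.0):
    """Minimal sketch: base loss + output sharpening + l_p (p <= 1) regularizer.

    Hyperparameters are placeholders chosen for illustration only.
    """
    # Sharpen the output distribution by temperature-scaling the logits
    # (tau > 1 makes the softmax distribution sharper).
    sharp_logits = tau * logits
    probs = F.softmax(sharp_logits, dim=1)

    # Base loss on the sharpened output; any loss could be plugged in here.
    ce = F.cross_entropy(sharp_logits, targets)

    # l_p-norm regularizer with p <= 1: sum_k probs_k^p is minimized when the
    # distribution is (near) one-hot, i.e. sparse.
    lp = probs.clamp_min(1e-8).pow(p).sum(dim=1).mean()

    return ce + lam * lp

# Toy usage: a batch of 8 samples over 10 classes with (possibly noisy) labels.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = sparse_regularized_loss(logits, targets)
loss.backward()
```

The intent of the penalty term is that, because p ≤ 1, the per-sample sum of probs_k^p is smallest when all mass concentrates on a single class, so minimizing it pushes the network output toward the one-hot set that the paper's robustness argument requires, without the zero-gradient problem of a hard one-hot constraint.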
Pages: 72-81 (10 pages)
Related Papers (50 records)
  • [31] Yuan, Bodi; Chen, Jianyu; Zhang, Weidong; Tai, Hung-Shuo; McMains, Sara. Iterative Cross Learning on Noisy Labels. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV 2018), 2018: 757-765.
  • [32] Yang, Seunghan; Park, Hyoungseob; Byun, Junyoung; Kim, Changick. Robust Federated Learning With Noisy Labels. IEEE Intelligent Systems, 2022, 37(2): 35-43.
  • [33] Kim, Youngdong; Yim, Junho; Yun, Juseung; Kim, Junmo. NLNL: Negative Learning for Noisy Labels. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 101-110.
  • [34] Chen, Yingyi; Hu, Shell Xu; Shen, Xi; Ai, Chunrong; Suykens, Johan A. K. Compressing Features for Learning With Noisy Labels. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(2): 2124-2138.
  • [35] Kim, Taehyeon; Ko, Jongwoo; Cho, Sangwook; Choi, Jinhwan; Yun, Se-Young. FINE Samples for Learning with Noisy Labels. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [36] Li, Yuncheng; Yang, Jianchao; Song, Yale; Cao, Liangliang; Luo, Jiebo; Li, Li-Jia. Learning from Noisy Labels with Distillation. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 1928-1936.
  • [37] Han, Bo; Tsang, Ivor W.; Chen, Ling; Yu, Celina P.; Fung, Sai-Fu. Progressive Stochastic Learning for Noisy Labels. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(10): 5136-5148.
  • [38] Liu, Yun-Peng; Xu, Ning; Zhang, Yu; Geng, Xin. Label Distribution for Learning with Noisy Labels. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020), 2020: 2568-2574.
  • [39] Beck, Amir; Refael, Yehonathan. Sparse regularization via bidualization. Journal of Global Optimization, 2022, 82(3): 463-482.