Noisy Concurrent Training for Efficient Learning under Label Noise

Cited by: 9
Authors
Sarfraz, Fahad [1]
Arani, Elahe [1]
Zonooz, Bahram [1]
Affiliations
[1] NavInfo Europe, Adv Res Lab, Eindhoven, Netherlands
DOI: 10.1109/WACV48630.2021.00320
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) fail to learn effectively under label noise and have been shown to memorize random labels, which degrades their generalization performance. We identify learning in isolation, the use of one-hot encoded labels as the sole source of supervision, and the lack of regularization to discourage memorization as the major shortcomings of the standard training procedure. We therefore propose Noisy Concurrent Training (NCT), which leverages collaborative learning to use the consensus between two models as an additional source of supervision. Furthermore, inspired by trial-to-trial variability in the brain, we propose a counter-intuitive regularization technique, target variability, which entails randomly changing the labels of a percentage of training samples in each batch as a deterrent to memorization and over-generalization in DNNs. Target variability is applied independently to each model to keep the two models diverged and to avoid confirmation bias. Since DNNs tend to learn simple patterns first before memorizing noisy labels, we employ a dynamic learning scheme in which the two models rely increasingly on their consensus as training progresses. NCT also progressively increases the target variability to avoid memorization in the later stages of training. We demonstrate the effectiveness of our approach on both synthetic and real-world noisy benchmark datasets.
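To make the abstract's training procedure concrete, the following is a minimal PyTorch sketch of what one NCT-style step could look like based only on the description above. The helper apply_target_variability, the KL-based consensus term, and the single mixing weight consensus_w are our assumptions for illustration, not the paper's exact formulation; the authors' loss weighting and schedules may differ.

import torch
import torch.nn.functional as F

def apply_target_variability(labels, num_classes, rate):
    # Hypothetical helper: replace a `rate` fraction of the batch's labels
    # with uniformly sampled class indices (the paper's exact sampling
    # scheme may differ).
    labels = labels.clone()
    n_flip = int(rate * labels.size(0))
    if n_flip == 0:
        return labels
    idx = torch.randperm(labels.size(0), device=labels.device)[:n_flip]
    labels[idx] = torch.randint(0, num_classes, (n_flip,), device=labels.device)
    return labels

def nct_step(model_a, model_b, x, y, num_classes, consensus_w, tv_rate):
    # One step for the two concurrently trained models.
    logits_a, logits_b = model_a(x), model_b(x)

    # Target variability is applied independently per model, keeping the
    # two models diverged and avoiding confirmation bias.
    y_a = apply_target_variability(y, num_classes, tv_rate)
    y_b = apply_target_variability(y, num_classes, tv_rate)

    ce_a = F.cross_entropy(logits_a, y_a)
    ce_b = F.cross_entropy(logits_b, y_b)

    # Consensus term: each model matches the peer's (detached) predictive
    # distribution via KL divergence.
    kl_a = F.kl_div(F.log_softmax(logits_a, dim=1),
                    F.softmax(logits_b.detach(), dim=1),
                    reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b, dim=1),
                    F.softmax(logits_a.detach(), dim=1),
                    reduction="batchmean")

    # Mix label supervision and consensus; consensus_w is assumed to ramp
    # up over training so the models rely increasingly on their consensus.
    loss_a = (1 - consensus_w) * ce_a + consensus_w * kl_a
    loss_b = (1 - consensus_w) * ce_b + consensus_w * kl_b
    return loss_a, loss_b

In this sketch both consensus_w and tv_rate would start small and be increased on a schedule as training progresses, mirroring the abstract's dynamic learning scheme and its progressive increase of target variability to deter memorization in later stages.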
Pages: 3158-3167
Page count: 10
Related papers
50 items total (first 10 shown)
  • [1] Multiple Instance Learning for Training Neural Networks under Label Noise
    Duffner, Stefan; Garcia, Christophe
    2020 International Joint Conference on Neural Networks (IJCNN), 2020
  • [2] Estimating Noise Transition Matrix with Label Correlations for Noisy Multi-Label Learning
    Li, Shikun; Xia, Xiaobo; Zhang, Hansong; Zhan, Yibing; Ge, Shiming; Liu, Tongliang
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [3] On better detecting and leveraging noisy samples for learning with severe label noise
    Miao, Qing; Wu, Xiaohe; Xu, Chao; Zuo, Wangmeng; Meng, Zhaopeng
    Pattern Recognition, 2023, 136
  • [4] Ensemble Methods for Label Noise Detection Under the Noisy at Random Model
    de Moura, Kecia G.; Prudencio, Ricardo B. C.; Cavalcanti, George D. C.
    2018 7th Brazilian Conference on Intelligent Systems (BRACIS), 2018: 474-479
  • [5] Contrastive learning of graphs under label noise
    Li, Xianxian; Li, Qiyu; Li, De; Qian, Haodong; Wang, Jinyan
    Neural Networks, 2024, 172
  • [6] Efficient Testable Learning of Halfspaces with Adversarial Label Noise
    Diakonikolas, Ilias; Kane, Daniel M.; Kontonis, Vasilis; Liu, Sihan; Zarifis, Nikos
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [7] Meta Label Correction for Noisy Label Learning
    Zheng, Guoqing; Awadallah, Ahmed Hassan; Dumais, Susan
    Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021, 35: 11053-11061
  • [8] Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification
    Zhu, Dawei; Hedderich, Michael A.; Zhai, Fangzhou; Adelani, David Ifeoluwa; Klakow, Dietrich
    Proceedings of the Third Workshop on Insights from Negative Results in NLP (Insights 2022), 2022: 62-67
  • [9] Contrastive label correction for noisy label learning
    Huang, Bin; Lin, Yaohai; Xu, Chaoyang
    Information Sciences, 2022, 611: 173-184
  • [10] Robust Learning of Multi-Label Classifiers under Label Noise
    Kumar, Himanshu; Manwani, Naresh; Sastry, P. S.
    Proceedings of the 7th ACM IKDD CoDS and 25th COMAD (CoDS-COMAD 2020), 2020: 90-97