Learning with Noisy Labels via Self-Supervised Adversarial Noisy Masking

Cited by: 6
Authors
Tu, Yuanpeng [1 ]
Zhang, Boshen [2 ]
Li, Yuxi [2 ]
Liu, Liang [2 ]
Li, Jian [2 ]
Zhang, Jiangning [2 ]
Wang, Yabiao [2 ]
Wang, Chengjie [2 ,3 ]
Zhao, Cai Rong [1 ]
Affiliations
[1] Tongji Univ, Dept Elect & Informat Engn, Shanghai, Peoples R China
[2] Tencent, YouTu Lab, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Keywords
CLASSIFICATION;
DOI
10.1109/CVPR52729.2023.01553
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Collecting large-scale datasets is crucial for training deep models; however, annotating the data inevitably yields noisy labels, which pose challenges to deep learning algorithms. Previous efforts tend to mitigate this problem by identifying and removing noisy samples or correcting their labels according to statistical properties (e.g., loss values) of the training samples. In this paper, we tackle the problem from a new perspective: delving into deep feature maps, we empirically find that models trained with clean and mislabeled samples manifest distinguishable activation feature distributions. From this observation, a novel robust training approach termed adversarial noisy masking is proposed. The idea is to regularize deep features with a label-quality-guided masking scheme, which adaptively modulates the input data and the label simultaneously, preventing the model from overfitting noisy samples. Furthermore, an auxiliary task is designed to reconstruct the input data; it naturally provides noise-free self-supervised signals that reinforce the generalization ability of the model. The proposed method is simple yet effective; tested on synthetic and real-world noisy datasets, it obtains significant improvements over previous methods. Code is available at https://github.com/yuanpengtu/SANM.
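The abstract describes two ingredients: a label-quality-guided masking scheme that modulates both the input and the label, and an auxiliary reconstruction task that supplies noise-free self-supervised signals. Below is a minimal PyTorch sketch of what one such training step could look like. The toy model, the patch-mask ratio that scales with a per-sample noise score, the label-softening rule, and the loss weight `lam_rec` are all illustrative assumptions for exposition, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAutoencoderClassifier(nn.Module):
    """Toy encoder-decoder with a classification head (illustration only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def noisy_masking_step(model, images, labels, noise_score,
                       num_classes=10, patch=8, lam_rec=0.1):
    """One hypothetical training step of noise-aware masking.

    noise_score: float tensor of shape (B,) in [0, 1]; larger means the
    label is more likely wrong (e.g. a normalized warm-up loss).
    Assumes H and W are divisible by `patch` and by 4 (e.g. 32x32 inputs).
    """
    b, _, h, w = images.shape
    # Patch-level binary mask whose drop ratio grows with noise_score,
    # so suspicious samples have more of their input hidden.
    gh, gw = h // patch, w // patch
    keep_prob = 1.0 - 0.5 * noise_score.view(b, 1, 1)   # mask up to 50% (assumed cap)
    mask = (torch.rand(b, gh, gw, device=images.device) < keep_prob).float()
    mask = mask.repeat_interleave(patch, 1).repeat_interleave(patch, 2).unsqueeze(1)
    masked = images * mask

    # Soften the one-hot label in proportion to the estimated label noise.
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    soft_target = (1 - noise_score).view(b, 1) * one_hot + noise_score.view(b, 1) * uniform

    logits, recon = model(masked)
    cls_loss = torch.mean(torch.sum(-soft_target * F.log_softmax(logits, dim=1), dim=1))
    rec_loss = F.mse_loss(recon, images)   # label-free self-supervised signal
    return cls_loss + lam_rec * rec_loss
```

In a full pipeline, `noise_score` would typically be produced by a warm-up stage (e.g. per-sample losses), and the masking and label-modulation schedules would follow the paper rather than the fixed 50% cap and linear softening used in this sketch.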
Pages: 16186-16195
Page count: 10
Related Papers
50 in total
  • [21] Graph Adversarial Self-Supervised Learning
    Yang, Longqi
    Zhang, Liangliang
    Yang, Wenjing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [22] Adversarial Self-Supervised Contrastive Learning
    Kim, Minseon
    Tack, Jihoon
    Hwang, Sung Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [23] Self-Supervised Adversarial Imitation Learning
    Monteiro, Juarez
    Gavenski, Nathan
    Meneguzzi, Felipe
    Barros, Rodrigo C.
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [24] Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain Adaptation
    Chen, Weijie
    Lin, Luojun
    Yang, Shicai
    Xie, Di
    Pu, Shiliang
    Zhuang, Yueting
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 10185 - 10192
  • [25] Noisy Agents: Self-supervised Exploration by Predicting Auditory Events
    Gan, Chuang
    Chen, Xiaoyu
    Isola, Phillip
    Torralba, Antonio
    Tenenbaum, Joshua B.
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 9259 - 9265
  • [26] Evolved Part Masking for Self-Supervised Learning
    Feng, Zhanzhou
    Zhang, Shiliang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10386 - 10395
  • [27] Learning With Noisy Labels via Self-Reweighting From Class Centroids
    Ma, Fan
    Wu, Yu
    Yu, Xin
    Yang, Yi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6275 - 6285
  • [28] Deep Self-Learning From Noisy Labels
    Han, Jiangfan
    Luo, Ping
    Wang, Xiaogang
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 5137 - 5146
  • [29] Robust Semi-Supervised Classification for Noisy Labels Based on Self-Paced Learning
    Gu, Nannan
    Fan, Mingyu
    Meng, Deyu
    IEEE SIGNAL PROCESSING LETTERS, 2016, 23 (12) : 1806 - 1810
  • [30] Learning and Mining with Noisy Labels
    Sugiyama, Masashi
    Liu, Tongliang
    Han, Bo
    Liu, Yang
    Niu, Gang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 5152 - 5155