NRAT: towards adversarial training with inherent label noise

Cited by: 0
Authors
Zhen Chen
Fu Wang
Ronghui Mu
Peipei Xu
Xiaowei Huang
Wenjie Ruan
Affiliations
[1] University of Liverpool, Department of Computer Science
[2] University of Exeter, Department of Computer Science
[3] Lancaster University, Department of Computer Science
Source
Machine Learning | 2024, Vol. 113
Keywords
Adversarial training; Robust loss functions; Noisy labels
DOI
Not available
Abstract
Adversarial training (AT) is widely recognized as the most effective defense against adversarial attacks on deep neural networks, and it is formulated as a min-max optimization. Most AT algorithms are geared towards research-oriented datasets such as MNIST and CIFAR-10, where the labels are generally correct. However, noisy labels, e.g., mislabelling, are inevitable in real-world datasets. In this paper, we investigate AT with inherent label noise, where the training dataset itself contains mislabeled samples. We first empirically show that the performance of AT typically degrades as the label noise rate increases. We then propose a Noisy-Robust Adversarial Training (NRAT) algorithm, which leverages recent advances in learning with noisy labels to improve the performance of AT in the presence of label noise. For experimental comparison, we consider two essential metrics in AT: (i) the trade-off between natural and robust accuracy; and (ii) robust overfitting. Our experiments show that NRAT's performance is on par with, or better than, state-of-the-art AT methods on both evaluation metrics. Our code is publicly available at: https://github.com/TrustAI/NRAT.
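The min-max optimization mentioned in the abstract is, in the standard form used across the AT literature, the following (a sketch of the generic objective; the paper's actual formulation may substitute a noise-robust loss for the cross-entropy, per its "Robust loss functions" keyword):

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
\left[ \max_{\|\delta\|_{p} \le \epsilon}
\mathcal{L}\bigl(f_{\theta}(x+\delta),\, y\bigr) \right]
```

The inner maximization finds the worst-case perturbation \(\delta\) within an \(\ell_p\)-ball of radius \(\epsilon\) around each input \(x\); the outer minimization trains the model parameters \(\theta\) on those perturbed examples. Under inherent label noise, the supervision signal \(y\) itself may be wrong, so the inner maximization can push the model towards a corrupted target, which is the failure mode the paper targets.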
Pages: 3589-3610 (21 pages)