Improving Robustness of DNNs against Common Corruptions via Gaussian Adversarial Training

Cited by: 2
Authors
Yi, Chenyu [1 ,2 ]
Li, Haoliang [1 ,2 ]
Wan, Renjie [1 ,2 ]
Kot, Alex C. [1 ,2 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
[2] Nanyang Technol Univ, Rapid Rich Object Search ROSE Lab, Singapore, Singapore
Keywords
Deep Learning; Robustness to Common Corruptions; Adversarial Training; Data Augmentation;
DOI
10.1109/vcip49819.2020.9301856
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have demonstrated tremendous success in image classification, but their performance degrades sharply when evaluated on slightly different test data (e.g., data with corruptions). To address this issue, we propose a minimax approach, Gaussian Adversarial Training (GAT), to improve the common-corruption robustness of deep neural networks. Specifically, we train neural networks with adversarial examples whose perturbations are Gaussian-distributed. Our experiments show that GAT improves neural networks' robustness to noise corruptions more than other baseline methods, and it also outperforms the state-of-the-art method in improving overall robustness to common corruptions.
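The minimax idea sketched in the abstract can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the inner maximization is approximated by drawing several Gaussian-distributed perturbations and training on the worst one, and the toy logistic-regression model, hyperparameters, and function names are all illustrative assumptions.

```python
# Hedged sketch of a Gaussian Adversarial Training (GAT) loop: sample
# Gaussian perturbations, keep the loss-maximizing one (inner max),
# then take a gradient step on that perturbed batch (outer min).
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Binary cross-entropy loss and gradient for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def gat_step(w, X, y, sigma=0.1, n_samples=8, lr=0.5):
    """One GAT update: pick the worst Gaussian perturbation, then descend."""
    worst_loss, worst_X = -np.inf, X
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=X.shape)  # Gaussian-distributed perturbation
        l, _ = loss_and_grad(w, X + noise, y)
        if l > worst_loss:                            # approximate inner maximization
            worst_loss, worst_X = l, X + noise
    _, g = loss_and_grad(w, worst_X, y)               # outer minimization step
    return w - lr * g

# Toy data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
loss0, _ = loss_and_grad(w, X, y)
for _ in range(100):
    w = gat_step(w, X, y)
loss1, _ = loss_and_grad(w, X, y)
print(loss1 < loss0)  # fitting succeeds despite worst-case Gaussian noise
```

In the paper's setting the same worst-over-Gaussian-samples scheme would be applied per minibatch to a deep network; the sampled-maximum is only one plausible way to realize the inner maximization of the minimax objective.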
Pages: 17 - 20
Page count: 4
Related Papers
50 records total
  • [1] Improving robustness against common corruptions with frequency biased models
    Saikia, Tonmoy
    Schmid, Cordelia
    Brox, Thomas
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 10191 - 10200
  • [2] Improving robustness against common corruptions by covariate shift adaptation
    Schneider, Steffen
    Rusak, Evgenia
    Eck, Luisa
    Bringmann, Oliver
    Brendel, Wieland
    Bethge, Matthias
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Improving the affordability of robustness training for DNNs
    Gupta, Sidharth
    Dube, Parijat
    Verma, Ashish
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3383 - 3392
  • [4] Deep Defense: Training DNNs with Improved Adversarial Robustness
    Yan, Ziang
    Guo, Yiwen
    Zhang, Changshui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [5] Effective and Robust Adversarial Training Against Data and Label Corruptions
    Zhang, Peng-Fei
    Huang, Zi
    Xu, Xin-Shun
    Bai, Guangdong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9477 - 9488
  • [6] Sliced Wasserstein adversarial training for improving adversarial robustness
    Lee, W.
    Lee, S.
    Kim, H.
    Lee, J.
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2024, 15 (08) : 3229 - 3242
  • [7] Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training
    Ingle, G.
    Pawale, S.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (03) : 1210 - 1228
  • [8] Adversarial Minimax Training for Robustness Against Adversarial Examples
    Komiyama, Ryota
    Hattori, Motonobu
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302 : 690 - 699
  • [9] Improving adversarial robustness of Bayesian neural networks via multi-task adversarial training
    Chen, Xu
    Liu, Chuancai
    Zhao, Yue
    Jia, Zhiyang
    Jin, Ge
    INFORMATION SCIENCES, 2022, 592 : 156 - 173
  • [10] Towards Better Robustness against Common Corruptions for Unsupervised Domain Adaptation
    Gao, Zhiqiang
    Huang, Kaizhu
    Zhang, Rui
    Liu, Dawei
    Ma, Jieming
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18836 - 18847