Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

Cited by: 183
Authors
Su, Dong [1 ]
Zhang, Huan [2 ]
Chen, Hongge [3 ]
Yi, Jinfeng [4 ]
Chen, Pin-Yu [1 ]
Gao, Yupeng [1 ]
Affiliations
[1] IBM Res, New York, NY 10598 USA
[2] Univ Calif Davis, Davis, CA 95616 USA
[3] MIT, Cambridge, MA 02139 USA
[4] JD AI Res, Beijing, Peoples R China
Keywords
Deep neural networks; Adversarial attacks; Robustness
DOI
10.1007/978-3-030-01258-8_39
CLC Classification Number
TP18 (Artificial Intelligence Theory)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Prediction accuracy has long been the sole standard for comparing image classification models, including in the ImageNet competition. However, recent studies have highlighted the lack of robustness of well-trained deep neural networks to adversarial examples: visually imperceptible perturbations to natural images can easily be crafted to mislead image classifiers. To demystify the trade-offs between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate, and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) linear scaling law: the empirical ℓ2 and ℓ∞ distortion metrics scale linearly with the logarithm of classification error; (2) model architecture is a more critical factor for robustness than model size, and the disclosed accuracy-robustness Pareto frontier can serve as an evaluation criterion for ImageNet model designers; (3) for a similar network architecture, increasing network depth slightly improves robustness in ℓ∞ distortion; (4) there exist models (in the VGG family) that exhibit high adversarial transferability, while most adversarial examples crafted from one model can only be transferred within the same family. Experiment code is publicly available at https://github.com/huanzhang12/Adversarial_Survey.
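Finding (1) describes a simple log-linear relationship between adversarial distortion and classification error. The sketch below is a reading aid only, not the authors' released code from the linked repository: it shows how such a fit could be computed with numpy from hypothetical per-model (top-1 error, average ℓ2 distortion) pairs; the numeric values are placeholders, not results from the paper.

```python
# Minimal sketch (assumptions: synthetic placeholder data, numpy only).
# Fits the reported log-linear relation: distortion ~ a * log(error) + b.
import numpy as np

# Hypothetical per-model measurements: top-1 error (fraction) and the
# average minimal ell_2 distortion of successful adversarial examples.
top1_error = np.array([0.30, 0.25, 0.22, 0.20, 0.18])
l2_distortion = np.array([0.45, 0.38, 0.33, 0.30, 0.27])

# Least-squares fit of distortion against log(error); polyfit returns
# the slope first, then the intercept, for a degree-1 polynomial.
a, b = np.polyfit(np.log(top1_error), l2_distortion, deg=1)
print(f"distortion ≈ {a:.3f} * log(error) + {b:.3f}")
```

In the paper, an analogous fit is reported for both the ℓ2 and ℓ∞ distortion metrics across the 18 benchmarked ImageNet models; the placeholder arrays above would be replaced by the measured values for each model.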
Pages: 644-661
Number of pages: 18
Related Papers
50 records in total
  • [31] Using ensemble methods to improve the robustness of deep learning for image classification in marine environments
    Wyatt, Mathew
    Radford, Ben
    Callow, Nikolaus
    Bennamoun, Mohammed
    Hickey, Sharyn
    METHODS IN ECOLOGY AND EVOLUTION, 2022, 13 (06): 1317-1328
  • [32] On Single Source Robustness in Deep Fusion Models
    Kim, Taewan
    Ghosh, Joydeep
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [33] Adversarial Robustness of Deep Sensor Fusion Models
    Wang, Shaojie
    Wu, Tong
    Chakrabarti, Ayan
    Vorobeychik, Yevgeniy
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022: 1371-1380
  • [34] A survey on robustness attacks for deep code models
    Qu, Yubin
    Huang, Song
    Yao, Yongming
    AUTOMATED SOFTWARE ENGINEERING, 2024, 31 (02)
  • [35] Robustness of Deep Learning Models for Vision Tasks
    Lee, Youngseok
    Kim, Jongweon
    APPLIED SCIENCES-BASEL, 2023, 13 (07)
  • [36] The Effects of Autoencoders on the Robustness of Deep Learning Models
    Degirmenci, Elif
    Ozcelik, Ilker
    Yazici, Ahmet
    2022 30TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU, 2022
  • [37] Robustness of deep learning models on graphs: A survey
    Xu, Jiarong
    Chen, Junru
    You, Siqi
    Xiao, Zhiqing
    Yang, Yang
    Lu, Jiangang
    AI OPEN, 2021, 2: 69-78
  • [38] Toward a Better Tradeoff Between Accuracy and Robustness for Image Classification via Adversarial Feature Diversity
    Xue, Wei
    Wang, Yonghao
    Wang, Yuchi
    Wang, Yue
    Du, Mingyang
    Zheng, Xiao
    IEEE JOURNAL ON MINIATURIZATION FOR AIR AND SPACE SYSTEMS, 2024, 5 (04): 254-264
  • [39] Robustness of Image-Based Malware Classification Models trained with Generative Adversarial Networks
    Reilly, Ciaran
    O'Shaughnessy, Stephen
    Thorpe, Christina
    PROCEEDINGS OF THE 2023 EUROPEAN INTERDISCIPLINARY CYBERSECURITY CONFERENCE, EICC 2023, 2023: 92-99
  • [40] Robustness of image classification using neural network techniques
    Sanjo, K
    PROCEEDINGS OF THE ELEVENTH THEMATIC CONFERENCE - GEOLOGIC REMOTE SENSING: PRACTICAL SOLUTIONS FOR REAL WORLD PROBLEMS, VOL II, 1996: 159-167