Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

Cited by: 183
Authors
Su, Dong [1 ]
Zhang, Huan [2 ]
Chen, Hongge [3 ]
Yi, Jinfeng [4 ]
Chen, Pin-Yu [1 ]
Gao, Yupeng [1 ]
Affiliations
[1] IBM Res, New York, NY 10598 USA
[2] Univ Calif Davis, Davis, CA 95616 USA
[3] MIT, Cambridge, MA 02139 USA
[4] JD AI Res, Beijing, Peoples R China
Keywords
Deep neural networks; Adversarial attacks; Robustness;
DOI
10.1007/978-3-030-01258-8_39
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Prediction accuracy has long been the sole standard for comparing the performance of image classification models, including in the ImageNet competition. However, recent studies have highlighted the lack of robustness of well-trained deep neural networks to adversarial examples: visually imperceptible perturbations of natural images can easily be crafted that mislead image classifiers into misclassification. To demystify the trade-off between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate, and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) a linear scaling law - the empirical ℓ2 and ℓ∞ distortion metrics scale linearly with the logarithm of the classification error; (2) model architecture is a more critical factor for robustness than model size, and the disclosed accuracy-robustness Pareto frontier can serve as an evaluation criterion for ImageNet model designers; (3) for similar network architectures, increasing network depth slightly improves robustness in ℓ∞ distortion; (4) some models (in the VGG family) exhibit high adversarial transferability, whereas most adversarial examples crafted from one model transfer only within the same family. Experiment code is publicly available at https://github.com/huanzhang12/Adversarial_Survey.
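The ℓ2 and ℓ∞ distortion metrics mentioned in the abstract measure the size of the adversarial perturbation per image. A minimal sketch of how they can be computed, assuming images are flattened lists of pixel values in [0, 1] (the function names are illustrative, not taken from the paper's code release):

```python
import math

def l2_distortion(x, x_adv):
    """Euclidean (ℓ2) norm of the perturbation x_adv - x."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_adv)))

def linf_distortion(x, x_adv):
    """Largest absolute per-pixel change (ℓ∞ norm)."""
    return max(abs(a - b) for a, b in zip(x, x_adv))

# Toy example: a 4-pixel "image" and a perturbed copy of it.
x = [0.2, 0.5, 0.8, 0.1]
x_adv = [0.21, 0.48, 0.8, 0.13]
print(round(l2_distortion(x, x_adv), 4))    # ≈ 0.0374
print(round(linf_distortion(x, x_adv), 4))  # ≈ 0.03
```

The paper's linear scaling law relates averages of these distortions, taken over successful adversarial examples, to the logarithm of each model's classification error.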
Pages: 644-661
Page count: 18
Related papers
50 items in total
  • [21] Robustness Stress Testing in Medical Image Classification
    Islam, Mobarakol
    Li, Zeju
    Glocker, Ben
    UNCERTAINTY FOR SAFE UTILIZATION OF MACHINE LEARNING IN MEDICAL IMAGING, UNSURE 2023, 2023, 14291 : 167 - 176
  • [22] Robustness and Explainability of Image Classification Based on QCNN
    Chen, Guoming
    Long, Shun
    Yuan, Zeduo
    Li, Wanyi
    Peng, Junfeng
    Quantum Engineering, 2023, 2023
  • [23] Improving Accuracy and Robustness of Space-Time Image Velocimetry (STIV) with Deep Learning
    Watanabe, Ken
    Fujita, Ichiro
    Iguchi, Makiko
    Hasegawa, Makoto
    WATER, 2021, 13 (15)
  • [24] Robustness of models addressing Information Disorder: A comprehensive review and benchmarking study
    Fenza, Giuseppe
    Loia, Vincenzo
    Stanzione, Claudio
    Di Gisi, Maria
    NEUROCOMPUTING, 2024, 596
  • [25] Deep Learning for Improving the Robustness of Image Encryption
    Chen, Jing
    Li, Xiao-Wei
    Wang, Qiong-Hua
    IEEE ACCESS, 2019, 7 : 181083 - 181091
  • [26] The robustness of contextuality and the contextuality cost of empirical models
    Meng, HuiXian
    Cao, HuaiXin
    Wang, WenHua
    SCIENCE CHINA-PHYSICS MECHANICS & ASTRONOMY, 2016, 59 (04) : 1 - 10
  • [29] Dongting Lake algal bloom forecasting: Robustness and accuracy analysis of deep learning models
    Liu, Yuxin
    Yang, Bin
    Xie, Kunting
    Sun, Julong
    Zhu, Shumin
    JOURNAL OF HAZARDOUS MATERIALS, 2025, 485
  • [30] The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
    Bastounis, Alexander
    Gorban, Alexander N.
    Hansen, Anders C.
    Higham, Desmond J.
    Prokhorov, Danil
    Sutton, Oliver
    Tyukin, Ivan Y.
    Zhou, Qinghua
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT I, 2023, 14254 : 530 - 541