SoK: Certified Robustness for Deep Neural Networks

Cited by: 15
Authors
Li, Linyi [1 ]
Xie, Tao [2 ]
Li, Bo [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 61820 USA
[2] Peking Univ, MoE, Key Lab High Confidence Software Technol, Beijing, Peoples R China
Keywords
certified robustness; neural networks; verification; CERT
DOI
10.1109/SP46215.2023.10179303
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, raising serious concerns when these models are deployed in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: a) empirical defenses, which can usually be adaptively attacked again and provide no robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches together with their practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy of robustness verification and training approaches and summarize the methodologies of representative algorithms; 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches; 3) discuss current research progress, theoretical barriers, main challenges, and future directions for certifiably robust approaches for DNNs; and 4) provide an open-source unified platform to evaluate 20+ representative certifiably robust approaches.
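The "robustness verification providing a lower bound" idea from the abstract can be illustrated with a minimal sketch of interval bound propagation (IBP), one family of verification approaches this SoK surveys. Everything below is a toy illustration under assumed conventions: the two-layer ReLU network, its weights, and the helper names (`affine_bounds`, `relu_bounds`, `certify`) are hypothetical and not taken from the paper or its benchmark platform.

```python
# Toy interval bound propagation (IBP) sketch: certify that a tiny
# two-layer ReLU network keeps its prediction for every input within
# an L-infinity ball of radius eps. Weights and names are illustrative.

def affine_bounds(W, b, lo, hi):
    """Propagate an elementwise interval [lo, hi] through y = W x + b.
    Positive weights take the lower input bound for the output lower
    bound; negative weights take the upper input bound, and vice versa."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(u)
    return new_lo, new_hi


def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]


def certify(W1, b1, W2, b2, x, eps, true_class):
    """Return True if the true-class logit provably exceeds every other
    logit for all perturbations with ||delta||_inf <= eps (a sound but
    incomplete check: False means 'not certified', not 'attackable')."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    lo, hi = affine_bounds(W1, b1, lo, hi)
    lo, hi = relu_bounds(lo, hi)
    lo, hi = affine_bounds(W2, b2, lo, hi)
    return all(hi[k] < lo[true_class] for k in range(len(lo)) if k != true_class)
```

A small radius typically certifies while a large one does not, which is exactly the certified-radius trade-off the benchmarked verification approaches try to tighten.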
Pages: 1289 - 1310 (22 pages)
Related Papers (50 total; items [41] - [50] shown)
  • [41] Towards Fast Computation of Certified Robustness for ReLU Networks
    Weng, Tsui-Wei
    Zhang, Huan
    Chen, Hongge
    Song, Zhao
    Hsieh, Cho-Jui
    Boning, Duane
    Dhillon, Inderjit S.
    Daniel, Luca
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [42] Center Smoothing: Certified Robustness for Networks with Structured Outputs
    Kumar, Aounon
    Goldstein, Tom
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [43] Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
    Sardar, Nida
    Khan, Sundas
    Hintze, Arend
    Mehra, Priyanka
    [J]. ENTROPY, 2023, 25 (06)
  • [44] Benchmarking the Robustness of Deep Neural Networks to Common Corruptions in Digital Pathology
    College of Computer Science and Technology, Zhejiang University, Hangzhou, China
    [J]. Lecture Notes in Computer Science, 2022: 242 - 252
  • [45] Research on Robustness of Deep Neural Networks Based Data Preprocessing Techniques
    Zhao, Hong
    Chang, You-kang
    Wang, Wei-jie
    [J]. International Journal of Network Security, 2022, 24 (02): 243 - 252
  • [46] Enhancing the Robustness of Deep Neural Networks by Meta-Adversarial Training
    Chang, You-Kang
    Zhao, Hong
    Wang, Wei-Jie
    [J]. International Journal of Network Security, 2023, 25 (01) : 122 - 130
  • [47] Improving adversarial robustness of deep neural networks by using semantic information
    Wang, Lina
    Chen, Xingshu
    Tang, Rui
    Yue, Yawei
    Zhu, Yi
    Zeng, Xuemei
    Wang, Wei
    [J]. KNOWLEDGE-BASED SYSTEMS, 2021, 226
  • [48] Improving Adversarial Robustness of Deep Neural Networks via Linear Programming
    Tang, Xiaochao
    Yang, Zhengfeng
    Fu, Xuanming
    Wang, Jianlin
    Zeng, Zhenbing
    [J]. THEORETICAL ASPECTS OF SOFTWARE ENGINEERING, TASE 2022, 2022, 13299 : 326 - 343
  • [49] Comparing the Robustness of Humans and Deep Neural Networks on Facial Expression Recognition
    Leveque, Lucie
    Villoteau, Francois
    Sampaio, Emmanuel V. B.
    Da Silva, Matthieu Perreira
    Le Callet, Patrick
    [J]. ELECTRONICS, 2022, 11 (23)
  • [50] Verifying Attention Robustness of Deep Neural Networks Against Semantic Perturbations
    Munakata, Satoshi
    Urban, Caterina
    Yokoyama, Haruki
    Yamamoto, Koji
    Munakata, Kazuki
    [J]. NASA FORMAL METHODS, NFM 2023, 2023, 13903 : 37 - 61