SoK: Certified Robustness for Deep Neural Networks

Cited by: 15
Authors
Li, Linyi [1 ]
Xie, Tao [2 ]
Li, Bo [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 61820 USA
[2] Peking Univ, MoE, Key Lab High Confidence Software Technol, Beijing, Peoples R China
Keywords
certified robustness; neural networks; verification; CERT;
DOI
10.1109/SP46215.2023.10179303
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises great concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on robust accuracy against any attacks under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches along with their practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for robustness verification and training approaches and summarize the methodologies of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches, 3) discuss current research progress, theoretical barriers, main challenges, and future directions for certifiably robust approaches for DNNs, and 4) provide an open-source unified platform to evaluate 20+ representative certifiably robust approaches.
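For concreteness, the verification problem the abstract refers to can be stated as follows; this is a minimal formalization under the standard $\ell_p$-ball threat model, and the notation is ours rather than the paper's. A classifier $f$ is certified robust at a labeled input $(x_0, y_0)$ within radius $\epsilon$ if

    $\arg\max_k f_k(x_0 + \delta) = y_0 \quad \text{for all } \delta \text{ such that } \|\delta\|_p \le \epsilon.$

A sound verifier only certifies inputs for which this condition truly holds, so the fraction of test points it certifies (the certified robust accuracy) lower-bounds the robust accuracy achievable against any attack confined to the $\epsilon$-ball.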
Pages: 1289 - 1310
Page count: 22
Related Papers
50 items in total
  • [31] Liu, Tao; Liu, Zihao; Liu, Qi; Wen, Wujie. Enhancing the Robustness of Deep Neural Networks from "Smart" Compression. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2018: 528-532.
  • [32] Laermann, Jan; Samek, Wojciech; Strodthoff, Nils. Achieving Generalizable Robustness of Deep Neural Networks by Stability Training. Pattern Recognition, DAGM GCPR 2019, 2019, 11824: 360-373.
  • [33] Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian. Improving the Robustness of Deep Neural Networks via Stability Training. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 4480-4488.
  • [34] Wang, Yang; Dong, Bo; Xu, Ke; Piao, Haiyin; Ding, Yufei; Yin, Baocai; Yang, Xin. A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks. ACM Transactions on Multimedia Computing, Communications, and Applications, 2023, 19(5).
  • [35] Amini, Sajjad; Ghaemmaghami, Shahrokh. Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations. IEEE Transactions on Multimedia, 2020, 22(7): 1889-1903.
  • [36] Guo, Xingwu; Wan, Wenjie; Zhang, Zhaodi; Zhang, Min; Song, Fu; Wen, Xuejun. Eager Falsification for Accelerating Robustness Verification of Deep Neural Networks. 2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE 2021), 2021: 345-356.
  • [37] Bacciu, Davide; Carta, Antonio; Gallicchio, Claudio; Schmittner, Christoph. Safety and Robustness for Deep Neural Networks: An Automotive Use Case. Computer Safety, Reliability, and Security, SAFECOMP 2023 Workshops, 2023, 14182: 95-107.
  • [38] Yasuda, Muneki; Sakata, Hironori; Cho, Seung-Il; Harada, Tomochika; Tanaka, Atushi; Yokoyama, Michio. An efficient test method for noise robustness of deep neural networks. IEICE Nonlinear Theory and Its Applications, 2019, 10(2): 221-235.
  • [39] Genzel, Martin; Macdonald, Jan; Marz, Maximilian. Solving Inverse Problems With Deep Neural Networks - Robustness Included? IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 1119-1134.
  • [40] Lin, Renhao; Zhou, Qinglei; Nan, Xiaofei; Hu, Tianqing. A Parallel Optimization Method for Robustness Verification of Deep Neural Networks. Mathematics, 2024, 12(12).