Scalable and Modular Robustness Analysis of Deep Neural Networks

Cited by: 1
Authors
Zhong, Yuyi [1 ]
Ta, Quang-Trung [1 ]
Luo, Tianzuo [1 ]
Zhang, Fanlong [2 ]
Khoo, Siau-Cheng [1 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
[2] Guangdong Univ Technol, Sch Comp, Guangzhou, Peoples R China
Funding
National Research Foundation, Singapore
Keywords
Abstract interpretation; Formal verification; Neural nets; Verification
DOI
10.1007/978-3-030-89051-3_1
CLC classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
As neural networks grow deeper and larger, scalable neural network analyzers are urgently needed. The main technical insight of our method is to analyze neural networks modularly, by segmenting a network into blocks and conducting the analysis block by block. In particular, we propose a network block summarization technique that captures the behavior within a network block in a block summary, and we leverage that summary to speed up the analysis. We instantiate our method on the CPU version of the state-of-the-art analyzer DeepPoly and name our system Bounded-Block Poly (BBPoly). We evaluate BBPoly extensively under various experimental settings. The results indicate that our method achieves precision comparable to DeepPoly while running faster and requiring fewer computational resources. Notably, BBPoly can analyze very large networks such as SkipNet or ResNet, containing up to one million neurons, in roughly one hour per input image, whereas DeepPoly can take up to 40 hours to analyze a single image.
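BBPoly itself builds on DeepPoly's polyhedral abstract domain, whose details are beyond this record. As a much coarser illustration of the block-summary idea the abstract describes, the sketch below uses plain interval bounds: a network is split into blocks, each block is analyzed in isolation, and the output interval of each block serves as its "summary" seeding the next block, so no block must refer back to earlier layers. All function and variable names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Interval propagation through x -> W @ x + b: positive weights
    take same-side input bounds, negative weights the opposite side."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = Wp @ lo + Wn @ hi + b
    new_hi = Wp @ hi + Wn @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def analyze_blockwise(blocks, lo, hi):
    """Analyze a network block by block. Each block is a list of
    (W, b) affine layers followed by ReLU; the bounds reached at a
    block boundary act as that block's summary and seed the next
    block's analysis."""
    summaries = []
    for block in blocks:
        for W, b in block:
            lo, hi = affine_bounds(lo, hi, W, b)
            lo, hi = relu_bounds(lo, hi)
        summaries.append((lo.copy(), hi.copy()))
    return summaries
```

Intervals lose more precision than DeepPoly's relational constraints, but the modular structure is the same: once a block's summary is computed, the layers inside it can be discarded, which is what bounds memory use on million-neuron networks.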
Pages: 3-22 (20 pages)