Scalable and Modular Robustness Analysis of Deep Neural Networks

Cited by: 1
Authors:
Zhong, Yuyi [1]
Ta, Quang-Trung [1]
Luo, Tianzuo [1]
Zhang, Fanlong [2]
Khoo, Siau-Cheng [1]
Affiliations:
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
[2] Guangdong Univ Technol, Sch Comp, Guangzhou, Peoples R China
Funding:
National Research Foundation, Singapore
Keywords:
Abstract interpretation; Formal verification; Neural nets
DOI:
10.1007/978-3-030-89051-3_1
CLC Classification:
TP31 [Computer Software]
Discipline Codes:
081202; 0835
Abstract
As neural networks are trained to be deeper and larger, scalable neural network analyzers are urgently needed. The main technical insight of our method is to analyze neural networks modularly, by segmenting a network into blocks and conducting the analysis for each block. In particular, we propose a network block summarization technique that captures the behavior within a network block using a block summary, and we leverage the summary to speed up the analysis process. We instantiate our method in the context of the CPU version of the state-of-the-art analyzer DeepPoly and name our system Bounded-Block Poly (BBPoly). We evaluate BBPoly extensively in various experimental settings. The results indicate that our method yields precision comparable to DeepPoly but runs faster and requires fewer computational resources. In particular, BBPoly can analyze very large neural networks, such as SkipNet or ResNet models with up to one million neurons, in around one hour per input image, whereas DeepPoly can need up to 40 hours to analyze a single image.
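The block-wise scheme described in the abstract can be illustrated with a minimal sketch. Note the assumptions: BBPoly actually builds on DeepPoly's relational (polyhedral) abstract domain with back-substitution, whereas this sketch uses plain interval bounds; concretizing bounds at each block boundary, so that each block is analyzed independently of the layers before it, stands in for the paper's block-summary idea. All function names below are illustrative, not from the paper's implementation.

```python
import numpy as np

def affine_bounds(lb, ub, W, b):
    """Interval propagation through an affine layer y = W x + b."""
    W_pos = np.maximum(W, 0.0)   # positive part of W pairs with same-sign bound
    W_neg = np.minimum(W, 0.0)   # negative part swaps lower/upper bounds
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def relu_bounds(lb, ub):
    """ReLU is monotone, so it maps bound-wise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

def analyze_blockwise(blocks, lb, ub):
    """Analyze the network block by block. Bounds are concretized at every
    block boundary, so each block's analysis only depends on its input box
    (a stand-in for a block summary), rather than on a symbolic expression
    threaded through the entire network."""
    for block in blocks:
        for kind, *params in block:
            if kind == "affine":
                lb, ub = affine_bounds(lb, ub, *params)
            elif kind == "relu":
                lb, ub = relu_bounds(lb, ub)
    return lb, ub

# One block computing y = relu(x0 - x1) over the input box [0,1] x [0,1]:
blocks = [[("affine", np.array([[1.0, -1.0]]), np.array([0.0])), ("relu",)]]
out_lb, out_ub = analyze_blockwise(blocks, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
# → out_lb = [0.], out_ub = [1.]
```

Because each block only sees a concrete input box, blocks can be summarized and analyzed independently, which is what bounds memory use and enables the speed-ups the abstract reports; the trade-off is some precision loss relative to fully relational analysis.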
Pages: 3-22 (20 pages)
Related Papers (showing 41-50 of 50):
  • [41] Accelerating Spectral Normalization for Enhancing Robustness of Deep Neural Networks
    Pan, Zhixin; Mishra, Prabhat
    2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2021), 2021: 260-265
  • [42] Enhancing the Robustness of Deep Neural Networks from "Smart" Compression
    Liu, Tao; Liu, Zihao; Liu, Qi; Wen, Wujie
    2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2018: 528-532
  • [43] Achieving Generalizable Robustness of Deep Neural Networks by Stability Training
    Laermann, Jan; Samek, Wojciech; Strodthoff, Nils
    Pattern Recognition, DAGM GCPR 2019, 2019, 11824: 360-373
  • [44] Improving the Robustness of Deep Neural Networks via Stability Training
    Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 4480-4488
  • [45] A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
    Wang, Yang; Dong, Bo; Xu, Ke; Piao, Haiyin; Ding, Yufei; Yin, Baocai; Yang, Xin
    ACM Transactions on Multimedia Computing, Communications, and Applications, 2023, 19(5)
  • [46] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad; Ghaemmaghami, Shahrokh
    IEEE Transactions on Multimedia, 2020, 22(7): 1889-1903
  • [47] Eager Falsification for Accelerating Robustness Verification of Deep Neural Networks
    Guo, Xingwu; Wan, Wenjie; Zhang, Zhaodi; Zhang, Min; Song, Fu; Wen, Xuejun
    2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE 2021), 2021: 345-356
  • [48] Safety and Robustness for Deep Neural Networks: An Automotive Use Case
    Bacciu, Davide; Carta, Antonio; Gallicchio, Claudio; Schmittner, Christoph
    Computer Safety, Reliability, and Security, SAFECOMP 2023 Workshops, 2023, 14182: 95-107
  • [49] An efficient test method for noise robustness of deep neural networks
    Yasuda, Muneki; Sakata, Hironori; Cho, Seung-Il; Harada, Tomochika; Tanaka, Atushi; Yokoyama, Michio
    IEICE Nonlinear Theory and Its Applications, 2019, 10(2): 221-235
  • [50] Solving Inverse Problems With Deep Neural Networks - Robustness Included?
    Genzel, Martin; Macdonald, Jan; Marz, Maximilian
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 1119-1134