PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach

Cited by: 0
Authors
Weng, Tsui-Wei [1 ]
Chen, Pin-Yu [2 ]
Nguyen, Lam M. [2 ]
Squillante, Mark S. [2 ]
Boopathy, Akhilan [1 ]
Oseledets, Ivan [3 ]
Daniel, Luca [1 ]
Affiliations
[1] MIT EECS, Cambridge, MA 02142 USA
[2] IBM Res, Yorktown Hts, NY USA
[3] Skoltech, Moscow, Russia
Keywords:
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We propose a novel framework, PROVEN, to PRObabilistically VErify Neural network robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbations follow a given distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case neural network robustness verification frameworks, so it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN, and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten robustness certificates to around 1.8x and 3.5x those of the worst-case certificates from CROWN and CNN-Cert, with at least 99.99% confidence.
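The kind of statement PROVEN certifies can be illustrated with a minimal Monte Carlo sketch. Note that this is not the PROVEN algorithm itself (which derives analytic certificates from the linear bounds of Fast-Lin/CROWN/CNN-Cert); it only shows, for a hypothetical toy network, what a probabilistic robustness guarantee of the form "with confidence 1 - delta, the prediction is preserved for at least a fraction p of perturbations drawn from a given distribution" looks like, here obtained via sampling and a Hoeffding bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed 2-layer ReLU network; weights are arbitrary, for illustration only.
W1 = rng.standard_normal((8, 4)); b1 = rng.standard_normal(8)
W2 = rng.standard_normal((3, 8)); b2 = rng.standard_normal(3)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x0 = rng.standard_normal(4)
c0 = int(np.argmax(net(x0)))          # original predicted class
eps, n, delta = 0.05, 2000, 1e-4      # radius, sample count, failure probability

# Sample perturbations uniformly from the ell_inf ball of radius eps and count
# how often the prediction is unchanged.
hits = 0
for _ in range(n):
    d = rng.uniform(-eps, eps, size=4)
    hits += int(np.argmax(net(x0 + d)) == c0)

p_hat = hits / n
# One-sided Hoeffding bound: with probability >= 1 - delta over the samples,
# the true robustness probability is at least p_lo.
p_lo = max(0.0, p_hat - np.sqrt(np.log(1.0 / delta) / (2.0 * n)))
print(f"empirical robustness = {p_hat:.4f}, certified lower bound = {p_lo:.4f}")
```

Unlike this sampling-based bound, PROVEN's certificate is computed analytically from the layer-wise linear bounds already produced by the worst-case verifiers, which is why its overhead over those methods is small.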
Pages: 10
Related papers (50 in total)
  • [1] Robustness of Neural Networks: A Probabilistic and Practical Approach
    Mangal, Ravi
    Nori, Aditya V.
    Orso, Alessandro
    [J]. 2019 IEEE/ACM 41ST INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: NEW IDEAS AND EMERGING RESULTS (ICSE-NIER 2019), 2019, : 93 - 96
  • [2] Probabilistic Robustness Quantification of Neural Networks
    Kishan, Gopi
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 15966 - 15967
  • [3] CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
    Pautov, Mikhail
    Tursynbek, Nurislam
    Munkhoeva, Marina
    Muravev, Nikita
    Petiushko, Aleksandr
    Oseledets, Ivan
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 7975 - 7983
  • [4] Verifying Attention Robustness of Deep Neural Networks Against Semantic Perturbations
    Munakata, Satoshi
    Urban, Caterina
    Yokoyama, Haruki
    Yamamoto, Koji
    Munakata, Kazuki
    [J]. NASA FORMAL METHODS, NFM 2023, 2023, 13903 : 37 - 61
  • [5] Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations
    Munakata, Satoshi
    Urban, Caterina
    Yokoyama, Haruki
    Yamamoto, Koji
    Munakata, Kazuki
    [J]. 2022 29TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE, APSEC, 2022, : 560 - 561
  • [6] Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations
    Mohapatra, Jeet
    Weng, Tsui-Wei
    Chen, Pin-Yu
    Liu, Sijia
    Daniel, Luca
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 241 - 249
  • [7] Towards Verifying the Geometric Robustness of Large-Scale Neural Networks
    Wang, Fu
    Xu, Peipei
    Ruan, Wenjie
    Huang, Xiaowei
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15197 - 15205
  • [8] MPBP: Verifying Robustness of Neural Networks with Multi-path Bound Propagation
    Zheng, Ye
    Liu, Jiaxiang
    Shi, Xiaomu
    [J]. PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022, : 1692 - 1696
  • [9] Probabilistic robustness estimates for feed-forward neural networks
    Couellan, Nicolas
    [J]. NEURAL NETWORKS, 2021, 142 : 138 - 147
  • [10] Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach
    Ferlez, James
    Shoukry, Yasser
    [J]. 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 5104 - 5109