CERT-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

Cited by: 18
|
Authors
Du, Tianyu [1 ]
Ji, Shouling [1 ,2 ]
Shen, Lujia [1 ]
Zhang, Yao [1 ]
Li, Jinfeng [1 ]
Shi, Jie [3 ]
Fang, Chengfang [3 ]
Yin, Jianwei [1 ,2 ]
Beyah, Raheem [4 ]
Wang, Ting [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Univ, Binjiang Inst, Hangzhou, Peoples R China
[3] Huawei Int, Singapore, Singapore
[4] Georgia Inst Technol, Atlanta, GA 30332 USA
[5] Penn State Univ, University Pk, PA 16802 USA
Funding
US National Science Foundation;
Keywords
deep learning; recurrent neural networks; robustness certification; natural language processing;
DOI
10.1145/3460120.3484538
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Certifiable robustness, the functionality of verifying whether the given region surrounding a data point admits any adversarial example, provides guaranteed security for neural networks deployed in adversarial environments. A plethora of work has been proposed to certify the robustness of feed-forward networks, e.g., FCNs and CNNs. Yet, most existing methods cannot be directly applied to recurrent neural networks (RNNs), due to their sequential inputs and unique operations. In this paper, we present CERT-RNN, a general framework for certifying the robustness of RNNs. Specifically, through a detailed analysis of the intrinsic properties of RNN-specific nonlinear functions over different input ranges, we exhaustively enumerate the cases for the exact formulas of their bounding planes, based on which we design several precise and efficient abstract transformers for the calculations unique to RNNs. CERT-RNN significantly outperforms the state-of-the-art methods (e.g., POPQORN [25]) in terms of (i) effectiveness - it provides much tighter robustness bounds, and (ii) efficiency - it scales to much more complex models. Through extensive evaluation, we validate CERT-RNN's superior performance across various network architectures (e.g., vanilla RNN and LSTM) and applications (e.g., image classification, sentiment analysis, toxic comment detection, and malicious URL detection). For instance, for the RNN-2-32 model on the MNIST sequence dataset, the robustness bound certified by CERT-RNN is on average 1.86 times larger than that by POPQORN.
Besides certifying the robustness of given RNNs, CERT-RNN also enables a range of practical applications, including evaluating the provable effectiveness of various defenses (i.e., the defense with a larger robustness region is considered more robust), improving the robustness of RNNs (i.e., combining CERT-RNN with verified robust training), and identifying sensitive words (i.e., the word with the smallest certified robustness bound is considered the most sensitive word in a sentence), which helps build more robust and interpretable deep learning systems. We will open-source CERT-RNN to facilitate DNN security research.
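The core idea behind such certification - propagating sound bounds on a perturbed input through the network's recurrent computation - can be illustrated with a simple interval analysis of a vanilla RNN cell. This is a much coarser technique than CERT-RNN's linear bounding planes and abstract transformers; the function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def ibp_rnn_step(h_lo, h_hi, x_lo, x_hi, W_h, W_x, b):
    """One step of interval bound propagation through a vanilla RNN cell
    h' = tanh(W_h @ h + W_x @ x + b).

    Given elementwise lower/upper bounds on the hidden state h and the
    input x, return sound elementwise bounds on the next hidden state.
    """
    # Split each weight matrix into positive and negative parts so that
    # the interval matrix-vector product is sound: positive weights map
    # lower bounds to lower bounds, negative weights swap them.
    Wh_pos, Wh_neg = np.maximum(W_h, 0.0), np.minimum(W_h, 0.0)
    Wx_pos, Wx_neg = np.maximum(W_x, 0.0), np.minimum(W_x, 0.0)
    pre_lo = Wh_pos @ h_lo + Wh_neg @ h_hi + Wx_pos @ x_lo + Wx_neg @ x_hi + b
    pre_hi = Wh_pos @ h_hi + Wh_neg @ h_lo + Wx_pos @ x_hi + Wx_neg @ x_lo + b
    # tanh is monotonically increasing, so the bounds pass through directly.
    return np.tanh(pre_lo), np.tanh(pre_hi)

# Toy usage: bound the hidden state after one step under an L∞ input
# perturbation of radius eps. Repeating this over a sequence yields
# certified (but loose) output bounds; if the lower bound of the true
# class logit exceeds the upper bounds of all other logits, the input
# is certifiably robust at radius eps.
rng = np.random.default_rng(0)
W_h, W_x, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 3)), rng.normal(size=4)
h0, x, eps = np.zeros(4), rng.normal(size=3), 0.1
h_lo, h_hi = ibp_rnn_step(h0, h0, x - eps, x + eps, W_h, W_x, b)
```

Interval bounds like these grow loose quickly over long sequences, which is exactly why tighter linear relaxations (as in POPQORN and CERT-RNN) are needed for practical certification.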
Pages: 516-534
Page count: 19
Related Papers
50 records
  • [1] CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
    Boopathy, Akhilan
    Weng, Tsui-Wei
    Chen, Pin-Yu
    Liu, Sijia
    Daniel, Luca
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3240 - 3247
  • [2] Towards Certifying the Asymmetric Robustness for Neural Networks: Quantification and Applications
    Li, Changjiang
    Ji, Shouling
    Weng, Haiqin
    Li, Bo
    Shi, Jie
    Beyah, Raheem
    Guo, Shanqing
    Wang, Zonghui
    Wang, Ting
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (06) : 3987 - 4001
  • [3] Certifying Geometric Robustness of Neural Networks
    Balunovic, Mislav
    Baader, Maximilian
    Singh, Gagandeep
    Gehr, Timon
    Vechev, Martin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [4] Towards Certifying l∞ Robustness using Neural Networks with l∞-dist Neurons
    Zhang, Bohang
    Cai, Tianle
    Lu, Zhou
    He, Di
    Wang, Liwei
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
    Pautov, Mikhail
    Tursynbek, Nurislam
    Munkhoeva, Marina
    Muravev, Nikita
    Petiushko, Aleksandr
    Oseledets, Ivan
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 7975 - 7983
  • [6] Towards Evaluating the Robustness of Neural Networks
    Carlini, Nicholas
    Wagner, David
    2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, : 39 - 57
  • [7] Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks
    Liu Y.
    Yang P.-F.
    Zhang L.-J.
    Wu Z.-L.
    Feng Y.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (07) : 1 - 33
  • [8] POPQORN: Quantifying Robustness of Recurrent Neural Networks
    Ko, Ching-Yun
    Lyu, Zhaoyang
    Weng, Tsui-Wei
    Daniel, Luca
    Wong, Ngai
    Lin, Dahua
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [9] An RNN-Based Framework for the MILP Problem in Robustness Verification of Neural Networks
    Xue, Hao
    Zeng, Xia
    Lin, Wang
    Yang, Zhengfeng
    Peng, Chao
    Zeng, Zhenbing
    COMPUTER VISION - ACCV 2022, PT I, 2023, 13841 : 571 - 586
  • [10] RNN-Stega: Linguistic Steganography Based on Recurrent Neural Networks
    Yang, Zhong-Liang
    Guo, Xiao-Qing
    Chen, Zi-Ming
    Huang, Yong-Feng
    Zhang, Yu-Jin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2019, 14 (05) : 1280 - 1295