CERT-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

Citations: 18
Authors
Du, Tianyu [1 ]
Ji, Shouling [1 ,2 ]
Shen, Lujia [1 ]
Zhang, Yao [1 ]
Li, Jinfeng [1 ]
Shi, Jie [3 ]
Fang, Chengfang [3 ]
Yin, Jianwei [1 ,2 ]
Beyah, Raheem [4 ]
Wang, Ting [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Univ, Binjiang Inst, Hangzhou, Peoples R China
[3] Huawei Int, Singapore, Singapore
[4] Georgia Inst Technol, Atlanta, GA 30332 USA
[5] Penn State Univ, University Pk, PA 16802 USA
Funding
National Science Foundation (USA);
Keywords
deep learning; recurrent neural networks; robustness certification; natural language processing;
DOI
10.1145/3460120.3484538
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Certifiable robustness, i.e., the ability to verify whether a given region surrounding a data point admits any adversarial example, provides guaranteed security for neural networks deployed in adversarial environments. A plethora of methods has been proposed to certify the robustness of feed-forward networks, e.g., FCNs and CNNs. Yet, most existing methods cannot be directly applied to recurrent neural networks (RNNs) due to their sequential inputs and unique operations. In this paper, we present CERT-RNN, a general framework for certifying the robustness of RNNs. Specifically, through a detailed analysis of the intrinsic properties of the RNN-specific nonlinear functions over different input ranges, we exhaustively enumerate the cases for the exact formulas of their bounding planes, based on which we design several precise and efficient abstract transformers for the operations unique to RNNs. CERT-RNN significantly outperforms the state-of-the-art methods (e.g., POPQORN [25]) in terms of (i) effectiveness, as it provides much tighter robustness bounds, and (ii) efficiency, as it scales to much more complex models. Through extensive evaluation, we validate CERT-RNN's superior performance across various network architectures (e.g., vanilla RNN and LSTM) and applications (e.g., image classification, sentiment analysis, toxic comment detection, and malicious URL detection). For instance, for the RNN-2-32 model on the MNIST sequence dataset, the robustness bound certified by CERT-RNN is on average 1.86 times larger than that certified by POPQORN. Besides certifying the robustness of given RNNs, CERT-RNN also enables a range of practical applications, including evaluating the provable effectiveness of various defenses (the defense with a larger certified robustness region is considered more robust), improving the robustness of RNNs (by combining CERT-RNN with verified robust training), and identifying sensitive words (the word with the smallest certified robustness bound is considered the most sensitive word in a sentence), all of which help build more robust and interpretable deep learning systems. We will open-source CERT-RNN to facilitate DNN security research.
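To make the bounding-plane idea concrete, the sketch below computes sound linear lower and upper bounds for tanh over an input interval, splitting into concave, convex, and mixed-sign cases; this mirrors the range-dependent case analysis the abstract describes. It is a minimal illustrative Python sketch, not CERT-RNN's actual abstract transformers (those additionally bound two-variable LSTM terms such as sigmoid(x)*tanh(y)); the function name tanh_linear_bounds and the midpoint-tangent choice are assumptions made for illustration.

```python
import numpy as np

def tanh_linear_bounds(l, u):
    """Sound linear bounds for tanh(x) on [l, u].

    Returns ((a_lo, b_lo), (a_up, b_up)) such that, for all x in [l, u],
    a_lo*x + b_lo <= tanh(x) <= a_up*x + b_up. Illustrative only: a
    range-dependent case analysis in the spirit of linear-relaxation
    certifiers, NOT CERT-RNN's actual transformers.
    """
    if l == u:  # degenerate interval: constant bounds suffice
        t = float(np.tanh(l))
        return (0.0, t), (0.0, t)
    a_sec = float((np.tanh(u) - np.tanh(l)) / (u - l))  # secant slope
    b_sec = float(np.tanh(l)) - a_sec * l               # secant intercept
    m = 0.5 * (l + u)
    a_tan = float(1.0 - np.tanh(m) ** 2)                # tangent slope tanh'(m)
    b_tan = float(np.tanh(m)) - a_tan * m               # tangent intercept
    if l >= 0.0:
        # tanh is concave on [l, u]: secant lies below, tangent lies above
        return (a_sec, b_sec), (a_tan, b_tan)
    if u <= 0.0:
        # tanh is convex on [l, u]: tangent lies below, secant lies above
        return (a_tan, b_tan), (a_sec, b_sec)
    # Mixed-sign interval: keep the secant slope and shift the intercept
    # until the line is sound. Extrema of g(x) = tanh(x) - a*x lie at the
    # endpoints or where tanh'(x) = a, i.e. tanh(x) = +/- sqrt(1 - a).
    cands = [l, u]
    if 0.0 < a_sec < 1.0:
        x_star = float(np.arctanh(np.sqrt(1.0 - a_sec)))
        cands += [c for c in (x_star, -x_star) if l < c < u]
    gaps = [float(np.tanh(c)) - a_sec * c for c in cands]
    return (a_sec, min(gaps)), (a_sec, max(gaps))

# Usage: bound tanh on [-0.5, 1.2] and spot-check soundness on a grid.
(lo_a, lo_b), (up_a, up_b) = tanh_linear_bounds(-0.5, 1.2)
xs = np.linspace(-0.5, 1.2, 1001)
assert np.all(lo_a * xs + lo_b <= np.tanh(xs) + 1e-9)
assert np.all(np.tanh(xs) <= up_a * xs + up_b + 1e-9)
```

Propagating such per-operation linear bounds through the network, layer by layer and time step by time step, yields an outer approximation of the reachable outputs; tighter per-operation bounds translate directly into larger certified robustness regions, which is the sense in which CERT-RNN's bounds improve on POPQORN's.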
Pages: 516-534
Page count: 19
Related Papers
50 records in total
  • [31] Property-Directed Verification and Robustness Certification of Recurrent Neural Networks
    Khmelnitsky, Igor
    Neider, Daniel
    Roy, Rajarshi
    Xie, Xuan
    Barbot, Benoit
    Bollig, Benedikt
    Finkel, Alain
    Haddad, Serge
    Leucker, Martin
    Ye, Lina
    AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS, ATVA 2021, 2021, 12971 : 364 - 380
  • [32] Robustness analysis and training of recurrent neural networks using dissipativity theory
    Pauli, Patricia
    Berberich, Julian
    Allgoewer, Frank
    AT-AUTOMATISIERUNGSTECHNIK, 2022, 70 (08) : 730 - 739
  • [33] Assessing the Robustness of Recurrent Neural Networks to Enhance the Spectrum of Reverberated Speech
    Paniagua-Penaranda, Carolina
    Zeledon-Cordoba, Marisol
    Coto-Jimenez, Marvin
    HIGH PERFORMANCE COMPUTING, CARLA 2019, 2020, 1087 : 276 - 290
  • [34] DeepState: Selecting Test Suites to Enhance the Robustness of Recurrent Neural Networks
    Liu, Zixi
    Feng, Yang
    Yin, Yining
    Chen, Zhenyu
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2022), 2022, : 598 - 609
  • [35] Towards a unified theory of correlations in recurrent neural networks
    Helias, Moritz
    Tetzlaff, Tom
    Diesmann, Markus
    BMC NEUROSCIENCE, 12 (SUPPL 1)
  • [36] NQF-RNN: probabilistic forecasting via neural quantile function-based recurrent neural networks
    Song, Jungyoon
    Chang, Woojin
    Song, Jae Wook
    APPLIED INTELLIGENCE, 2025, 55 (02)
  • [37] Towards Verifying the Geometric Robustness of Large-Scale Neural Networks
    Wang, Fu
    Xu, Peipei
    Ruan, Wenjie
    Huang, Xiaowei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15197 - 15205
  • [38] CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness
    Phan, Huy
    Yin, Miao
    Sui, Yang
    Yuan, Bo
    Zonouz, Saman
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 2065 - 2073
  • [39] RC-RNN: Reconfigurable Cache Architecture for Storage Systems Using Recurrent Neural Networks
    Ebrahimi, Shahriar
    Salkhordeh, Reza
    Osia, Seyed Ali
    Taheri, Ali
    Rabiee, Hamid R.
    Asadi, Hossein
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2022, 10 (03) : 1492 - 1506
  • [40] Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations
    Mohapatra, Jeet
    Weng, Tsui-Wei
    Chen, Pin-Yu
    Liu, Sijia
    Daniel, Luca
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 241 - 249