CERT-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

Cited: 18
Authors
Du, Tianyu [1 ]
Ji, Shouling [1 ,2 ]
Shen, Lujia [1 ]
Zhang, Yao [1 ]
Li, Jinfeng [1 ]
Shi, Jie [3 ]
Fang, Chengfang [3 ]
Yin, Jianwei [1 ,2 ]
Beyah, Raheem [4 ]
Wang, Ting [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Univ, Binjiang Inst, Hangzhou, Peoples R China
[3] Huawei Int, Singapore, Singapore
[4] Georgia Inst Technol, Atlanta, GA 30332 USA
[5] Penn State Univ, University Pk, PA 16802 USA
Funding
U.S. National Science Foundation;
Keywords
deep learning; recurrent neural networks; robustness certification; natural language processing;
DOI
10.1145/3460120.3484538
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Certifiable robustness, the functionality of verifying whether the given region surrounding a data point admits any adversarial example, provides guaranteed security for neural networks deployed in adversarial environments. A plethora of work has been proposed to certify the robustness of feed-forward networks, e.g., FCNs and CNNs. Yet, most existing methods cannot be directly applied to recurrent neural networks (RNNs), due to their sequential inputs and unique operations. In this paper, we present CERT-RNN, a general framework for certifying the robustness of RNNs. Specifically, through a detailed analysis of the intrinsic properties of RNN-specific functions over different input ranges, we exhaustively derive the exact formulas of their bounding planes case by case, based on which we design several precise and efficient abstract transformers for the operations unique to RNNs. CERT-RNN significantly outperforms the state-of-the-art methods (e.g., POPQORN [25]) in terms of (i) effectiveness - it provides much tighter robustness bounds, and (ii) efficiency - it scales to much more complex models. Through extensive evaluation, we validate CERT-RNN's superior performance across various network architectures (e.g., vanilla RNN and LSTM) and applications (e.g., image classification, sentiment analysis, toxic comment detection, and malicious URL detection). For instance, for the RNN-2-32 model on the MNIST sequence dataset, the robustness bound certified by CERT-RNN is on average 1.86 times larger than that by POPQORN. Besides certifying the robustness of given RNNs, CERT-RNN also enables a range of practical applications, including evaluating the provable effectiveness of various defenses (i.e., the defense with the larger certified robustness region is considered more robust), improving the robustness of RNNs (i.e., combining CERT-RNN with verified robust training), and identifying sensitive words (i.e., the word with the smallest certified robustness bound is considered the most sensitive word in a sentence), which helps build more robust and interpretable deep learning systems. We will open-source CERT-RNN to facilitate DNN security research.
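To make the bounding-plane idea concrete, below is a minimal CROWN/POPQORN-style sketch (Python with NumPy) of how sound linear bounding planes for tanh can be derived case by case over a pre-activation interval [l, u], exploiting that tanh is convex on (-inf, 0] and concave on [0, inf). The function names (tanh_bounding_planes, _tangent_through, etc.) are illustrative; this is the generic construction for S-shaped activations, not CERT-RNN's actual abstract transformers, which additionally handle RNN-specific operations such as the gate products inside LSTMs.

```python
import numpy as np

def _chord(f, l, u):
    """Slope and intercept of the secant through (l, f(l)) and (u, f(u))."""
    a = (f(u) - f(l)) / (u - l)
    return a, f(l) - a * l

def _tangent(f, df, d):
    """Slope and intercept of the tangent to f at the point d."""
    return df(d), f(d) - df(d) * d

def _tangent_through(f, df, p, lo, hi, steps=60):
    """Binary-search a tangent point d in [lo, hi] whose tangent line
    passes through (p, f(p)); the residual g(d) is monotone in d for tanh,
    so plain bisection converges to the unique root."""
    fp = f(p)
    for _ in range(steps):
        d = 0.5 * (lo + hi)
        if f(d) + df(d) * (p - d) - fp > 0.0:
            hi = d
        else:
            lo = d
    return 0.5 * (lo + hi)

def tanh_bounding_planes(l, u):
    """Sound linear bounds aL*z + bL <= tanh(z) <= aU*z + bU on [l, u].

    tanh is convex on (-inf, 0] and concave on [0, inf), so the exact
    formula of the planes splits into three cases."""
    f = np.tanh
    df = lambda z: 1.0 - np.tanh(z) ** 2
    m = 0.5 * (l + u)
    if u <= 0.0:
        # Purely convex segment: the chord lies above, a tangent lies below.
        aU, bU = _chord(f, l, u)
        aL, bL = _tangent(f, df, m)
    elif l >= 0.0:
        # Purely concave segment: a tangent lies above, the chord lies below.
        aU, bU = _tangent(f, df, m)
        aL, bL = _chord(f, l, u)
    else:
        # Mixed case: take the tangent through one endpoint that touches
        # the opposite branch of the curve (found by bisection above).
        aU, bU = _tangent(f, df, _tangent_through(f, df, l, 0.0, 20.0))
        aL, bL = _tangent(f, df, _tangent_through(f, df, u, -20.0, 0.0))
    return (aL, bL), (aU, bU)

# Quick soundness check by dense sampling over an example interval.
l, u = -0.8, 1.3
(aL, bL), (aU, bU) = tanh_bounding_planes(l, u)
z = np.linspace(l, u, 10_001)
assert np.all(aL * z + bL <= np.tanh(z) + 1e-9)
assert np.all(np.tanh(z) <= aU * z + bU + 1e-9)
print(f"lower plane: {aL:.4f}*z + {bL:.4f}, upper plane: {aU:.4f}*z + {bU:.4f}")
```

In a full certifier, such planes are propagated through every time step (the cross-terms of LSTM gates need dedicated multi-dimensional transformers, which is where CERT-RNN's case analysis comes in), and the certified radius is the largest perturbation for which the propagated lower bound of the true class stays above the upper bounds of all other classes.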
Pages: 516 - 534
Number of pages: 19
Related papers
50 records in total (showing [41] - [50])
  • [41] FiC-RNN: A Multi-FPGA Acceleration Framework for Deep Recurrent Neural Networks
    Sun, Yuxi
    Amano, Hideharu
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (12) : 2457 - 2462
  • [42] GR-RNN: Global-context residual recurrent neural networks for writer identification
    He, Sheng
    Schomaker, Lambert
    PATTERN RECOGNITION, 2021, 117
  • [43] SS-RNN: A Strengthened Skip Algorithm for Data Classification Based on Recurrent Neural Networks
    Cao, Wenjie
    Shi, Ya-Zhou
    Qiu, Huahai
    Zhang, Bengong
    FRONTIERS IN GENETICS, 2021, 12
  • [44] Convergence and robustness of bounded recurrent neural networks for solving dynamic Lyapunov equations
    Wang, Guancheng
    Hao, Zhihao
    Zhang, Bob
    Jin, Long
    INFORMATION SCIENCES, 2022, 588 : 106 - 123
  • [45] Improving the robustness of noisy MFCC features using minimal recurrent neural networks
    Potamitis, I
    Fakotakis, N
    Kokkinakis, G
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL V, 2000, : 271 - 276
  • [46] Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction
    Dong, Guoliang
    Wang, Jingyi
    Sun, Jun
    Zhang, Yang
    Wang, Xinyu
    Dai, Ting
    Dong, Jin Song
    Wang, Xingen
    2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), 2020, : 499 - 510
  • [47] Towards traffic matrix prediction with LSTM recurrent neural networks
    Zhao, Jianlong
    Qu, Hua
    Zhao, Jihong
    Jiang, Dingchao
    ELECTRONICS LETTERS, 2018, 54 (09) : 566 - 567
  • [48] Towards lifelong learning of Recurrent Neural Networks for control design
    Bonassi, Fabio
    Xie, Jing
    Farina, Marcello
    Scattolini, Riccardo
    2022 EUROPEAN CONTROL CONFERENCE (ECC), 2022, : 2018 - 2023
  • [49] Certifying unknown genuine multipartite entanglement by neural networks
    Chen, Zhenyu
    Lin, Xiaodie
    Wei, Zhaohui
    QUANTUM SCIENCE AND TECHNOLOGY, 2023, 8 (03)
  • [50] Reliability of analytical systems: use of control charts, time series models and recurrent neural networks (RNN)
    Rius, A
    Ruisanchez, I
    Callao, MP
    Rius, FX
    CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS, 1998, 40 (01) : 1 - 18