CERT-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

Cited by: 18
Authors
Du, Tianyu [1 ]
Ji, Shouling [1 ,2 ]
Shen, Lujia [1 ]
Zhang, Yao [1 ]
Li, Jinfeng [1 ]
Shi, Jie [3 ]
Fang, Chengfang [3 ]
Yin, Jianwei [1 ,2 ]
Beyah, Raheem [4 ]
Wang, Ting [5 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Univ, Binjiang Inst, Hangzhou, Peoples R China
[3] Huawei Int, Singapore, Singapore
[4] Georgia Inst Technol, Atlanta, GA 30332 USA
[5] Penn State Univ, University Pk, PA 16802 USA
Funding
US National Science Foundation;
Keywords
deep learning; recurrent neural networks; robustness certification; natural language processing;
DOI
10.1145/3460120.3484538
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Certifiable robustness, the ability to verify whether a given region surrounding a data point admits any adversarial example, provides guaranteed security for neural networks deployed in adversarial environments. A plethora of methods have been proposed to certify the robustness of feed-forward networks, e.g., FCNs and CNNs. Yet, most existing methods cannot be directly applied to recurrent neural networks (RNNs), due to their sequential inputs and unique operations. In this paper, we present CERT-RNN, a general framework for certifying the robustness of RNNs. Specifically, through a detailed analysis of the intrinsic properties of RNN-specific functions over different input ranges, we exhaustively derive the exact formulas of their bounding planes case by case, based on which we design several precise and efficient abstract transformers for the operations unique to RNNs. CERT-RNN significantly outperforms the state-of-the-art methods (e.g., POPQORN [25]) in terms of (i) effectiveness - it provides much tighter robustness bounds, and (ii) efficiency - it scales to much more complex models. Through extensive evaluation, we validate CERT-RNN's superior performance across various network architectures (e.g., vanilla RNN and LSTM) and applications (e.g., image classification, sentiment analysis, toxic comment detection, and malicious URL detection). For instance, for the RNN-2-32 model on the MNIST sequence dataset, the robustness bound certified by CERT-RNN is on average 1.86 times larger than that certified by POPQORN. Besides certifying the robustness of given RNNs, CERT-RNN also enables a range of practical applications, including evaluating the provable effectiveness of various defenses (i.e., the defense with a larger robustness region is considered more robust), improving the robustness of RNNs (i.e., by incorporating CERT-RNN into verified robust training), and identifying sensitive words (i.e., the word with the smallest certified robustness bound is considered the most sensitive word in a sentence), which helps build more robust and interpretable deep learning systems. We will open-source CERT-RNN to facilitate DNN security research.
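To make the certification workflow the abstract describes concrete, the following minimal sketch propagates naive interval bounds through a vanilla tanh RNN and binary-searches the largest L-infinity radius at which the true class provably wins. This is an illustration of the general idea only, not CERT-RNN's algorithm (CERT-RNN derives far tighter linear bounding planes for the operations unique to RNNs); all function names, weight shapes, and the use of plain interval-bound propagation are assumptions made for the sketch.

```python
# Illustrative sketch only: naive interval-bound propagation (IBP) through a
# vanilla tanh RNN, NOT the CERT-RNN method (which uses tighter linear
# bounding planes for RNN-specific operations). All names are hypothetical.
import numpy as np

def ibp_rnn_bounds(Wx, Wh, Wy, x_seq, eps):
    """Bound the logits over all sequences within an L-inf ball of radius
    eps around x_seq. Wx: (h, d), Wh: (h, h), Wy: (c, h)."""
    h_lo = h_hi = np.zeros(Wh.shape[0])
    for x in x_seq:                          # propagate bounds step by step
        lo_in, hi_in = x - eps, x + eps      # perturbed input interval
        # Split weights into positive/negative parts so each bound pairs
        # positive weights with the matching endpoint of the interval.
        z_lo = (np.maximum(Wx, 0) @ lo_in + np.minimum(Wx, 0) @ hi_in
                + np.maximum(Wh, 0) @ h_lo + np.minimum(Wh, 0) @ h_hi)
        z_hi = (np.maximum(Wx, 0) @ hi_in + np.minimum(Wx, 0) @ lo_in
                + np.maximum(Wh, 0) @ h_hi + np.minimum(Wh, 0) @ h_lo)
        # tanh is monotone, so applying it elementwise preserves the bounds.
        h_lo, h_hi = np.tanh(z_lo), np.tanh(z_hi)
    y_lo = np.maximum(Wy, 0) @ h_lo + np.minimum(Wy, 0) @ h_hi
    y_hi = np.maximum(Wy, 0) @ h_hi + np.minimum(Wy, 0) @ h_lo
    return y_lo, y_hi

def certified_radius(Wx, Wh, Wy, x_seq, label, eps_hi=1.0, iters=20):
    """Binary-search the largest eps at which the true class provably wins."""
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        y_lo, y_hi = ibp_rnn_bounds(Wx, Wh, Wy, x_seq, mid)
        rivals = np.delete(y_hi, label)      # best case for every other class
        if y_lo[label] > rivals.max():       # worst case still classified right
            lo = mid                         # certified at mid; try larger
        else:
            hi = mid
    return lo
```

The same binary search, applied to the perturbation radius of one word embedding at a time while the rest of the sentence is held fixed, also realizes the sensitive-word analysis sketched in the abstract: the word with the smallest certified radius is the one the prediction depends on most.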
Pages: 516-534
Number of pages: 19
Related papers
50 in total
  • [21] E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs
    Li, Zhe
    Ding, Caiwen
    Wang, Siyue
    Wen, Wujie
    Zhuo, Youwei
    Liu, Chang
    Qiu, Qinru
    Xu, Wenyao
    Lin, Xue
    Qian, Xuehai
    Wang, Yanzhi
    2019 25TH IEEE INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE COMPUTER ARCHITECTURE (HPCA), 2019, : 69 - 80
  • [22] DA-RNN: Semantic Mapping with Data Associated Recurrent Neural Networks
    Xiang, Yu
    Fox, Dieter
    ROBOTICS: SCIENCE AND SYSTEMS XIII, 2017,
  • [24] LSTM Recurrent Neural Network (RNN) for Anomaly Detection in Cellular Mobile Networks
    Al Mamun, S. M. Abdullah
    Beyaz, Mehmet
    MACHINE LEARNING FOR NETWORKING, 2019, 11407 : 222 - 237
  • [25] CS-RNN: efficient training of recurrent neural networks with continuous skips
    Chen, Tianyu
    Li, Sheng
    Yan, Jun
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (19): 16515 - 16532
  • [26] Improved Recurrent Neural Networks (RNN) based Intelligent Fund Transaction Model
    Hu, Gang
    Ye, Yi
    Zhang, Yin
    Hossain, M. Shamim
    2019 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2019,
  • [27] An Abstract Domain for Certifying Neural Networks
    Singh, Gagandeep
    Gehr, Timon
    Puschel, Markus
    Vechev, Martin
    PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2019, 3 (POPL):
  • [28] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [29] SBO-RNN: Reformulating Recurrent Neural Networks via Stochastic Bilevel Optimization
    Zhang, Ziming
    Yue, Yun
    Wu, Guojun
    Li, Yanhua
    Zhang, Haichong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [30] Question-Answer System on Episodic Data Using Recurrent Neural Networks (RNN)
    Yadav, Vineet
    Bharadwaj, Vishnu
    Bhatt, Alok
    Rawal, Ayush
    DATA MANAGEMENT, ANALYTICS AND INNOVATION, ICDMAI 2019, VOL 1, 2020, 1042 : 555 - 568