Crafting Adversarial Input Sequences for Recurrent Neural Networks

Cited by: 0
Authors:
Papernot, Nicolas [1]
McDaniel, Patrick [1]
Swami, Ananthram [2]
Harang, Richard [2]
Affiliations:
[1] Penn State Univ, University Pk, PA 16802 USA
[2] US Army, Res Lab, Adelphi, MD USA
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TN [Electronic technology; communication technology]
Discipline code: 0809
Abstract
Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs taking the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.
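The abstract describes crafting adversarial inputs by adding carefully selected perturbations to legitimate inputs so that the model's loss increases and its output flips. The sketch below illustrates that general idea on a toy tanh recurrent network with random, untrained weights: it estimates the gradient of a logistic loss with respect to the input sequence by finite differences and takes a fast-gradient-sign step. Everything here (the weights, dimensions, loss, and the use of numerical gradients) is an illustrative assumption, not the paper's actual attack or implementation.

```python
import numpy as np

# Illustrative toy setup -- weights and shapes are assumptions, not from the paper.
rng = np.random.default_rng(0)
T, D, H = 5, 3, 4                      # time steps, input dim, hidden dim
Wx = 0.5 * rng.normal(size=(H, D))     # input-to-hidden weights
Wh = 0.5 * rng.normal(size=(H, H))     # hidden-to-hidden weights
Wo = 0.5 * rng.normal(size=(1, H))     # hidden-to-output weights

def rnn_logit(x):
    """Run a tanh RNN over a sequence x of shape (T, D); return a scalar logit."""
    h = np.zeros(H)
    for t in range(T):
        h = np.tanh(Wx @ x[t] + Wh @ h)
    return float((Wo @ h)[0])

def loss(x, y):
    """Logistic loss of the RNN's prediction against label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-rnn_logit(x)))
    return -(y * np.log(p) + (1 - y) * np.log(1.0 - p))

def numeric_grad(x, y, h=1e-5):
    """Finite-difference gradient of the loss w.r.t. every entry of the input."""
    g = np.zeros_like(x)
    for idx in np.ndindex(*x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += h
        xm[idx] -= h
        g[idx] = (loss(xp, y) - loss(xm, y)) / (2.0 * h)
    return g

x = rng.normal(size=(T, D))            # a "legitimate" input sequence
y = 1                                  # its assumed true label
eps = 0.1                              # perturbation magnitude
# Fast-gradient-sign step: perturb each input entry in the direction
# that locally increases the loss.
x_adv = x + eps * np.sign(numeric_grad(x, y))

print(f"clean loss:       {loss(x, y):.4f}")
print(f"adversarial loss: {loss(x_adv, y):.4f}")
```

In practice, gradients through an unrolled recurrent network are computed by automatic differentiation rather than finite differences; the sketch only demonstrates that a small signed perturbation of the input sequence degrades the model's prediction.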
Pages: 49-54 (6 pages)
Related Papers (50 total)
  • [1] Adversarial Dropout for Recurrent Neural Networks
    Park, Sungrae
    Song, Kyungwoo
    Ji, Mingi
    Lee, Wonsung
    Moon, Il-Chul
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4699 - 4706
  • [2] Generating Adversarial Texts for Recurrent Neural Networks
    Liu, Chang
    Lin, Wang
    Yang, Zhengfeng
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 39 - 51
  • [3] Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks
    Deka, Shankar A.
    Stipanovic, Dusan M.
    Tomlin, Claire J.
    [J]. IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, 2022, 30 (06) : 2615 - 2629
  • [4] Recurrent Generative Adversarial Neural Networks for Compressive Imaging
    Mardani, Morteza
    Gong, Enhao
    Cheng, Joseph Y.
    Pauly, John
    Xing, Lei
    [J]. 2017 IEEE 7TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2017,
  • [5] Audio Adversarial Examples Generation with Recurrent Neural Networks
    Chang, Kuei-Huan
    Huang, Po-Hao
    Yu, Honggang
    Jin, Yier
    Wang, Ting-Chi
    [J]. 2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 488 - 493
  • [6] Password Guessing Based on Recurrent Neural Networks and Generative Adversarial Networks
    Wang, Ding
    Zou, Yun-Kai
    Tao, Yi
    Wang, Bin
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2021, 44 (08): : 1519 - 1534
  • [7] Crafting Adversarial Examples for Neural Machine Translation
    Zhang, Xinze
    Zhang, Junzhe
    Chen, Zhenhua
    He, Kun
    [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 1967 - 1977
  • [8] Crafting adversarial example with adaptive root mean square gradient on deep neural networks
    Xiao, Yatie
    Pun, Chi-Man
    Liu, Bo
    [J]. NEUROCOMPUTING, 2020, 389 : 179 - 195
  • [9] Input space bifurcation manifolds of recurrent neural networks
    Haschke, R
    Steil, JJ
    [J]. NEUROCOMPUTING, 2005, 64 : 25 - 38
  • [10] Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification
    Alzaidy, Sharoug
    Binsalleeh, Hamad
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (04):