Adversarial Examples for Models of Code

Cited by: 66
Authors
Yefet, Noam [1 ]
Alon, Uri [1 ]
Yahav, Eran [1 ]
Affiliations
[1] Technion, Haifa, Israel
Keywords
Adversarial Attacks; Targeted Attacks; Neural Models of Code;
DOI
10.1145/3428230
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Neural models of code have shown impressive results when performing tasks such as predicting method names and identifying certain kinds of bugs. We show that these models are vulnerable to adversarial examples, and introduce a novel approach for attacking trained models of code using adversarial examples. The main idea of our approach is to force a given trained model to make an incorrect prediction, as specified by the adversary, by introducing small perturbations that do not change the program's semantics, thereby creating an adversarial example. To find such perturbations, we present a new technique for Discrete Adversarial Manipulation of Programs (DAMP). DAMP works by deriving the desired prediction with respect to the model's inputs, while holding the model weights constant, and following the gradients to slightly modify the input code. We show that our DAMP attack is effective across three neural architectures: CODE2VEC, GGNN, and GNN-FILM, in both Java and C#. Our evaluations demonstrate that DAMP has up to 89% success rate in changing a prediction to the adversary's choice (a targeted attack) and a success rate of up to 94% in changing a given prediction to any incorrect prediction (a non-targeted attack). To defend a model against such attacks, we empirically examine a variety of possible defenses and discuss their trade-offs. We show that some of these defenses can dramatically drop the success rate of the attacker, with a minor penalty of 2% relative degradation in accuracy when they are not performing under attack.
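The abstract describes the core DAMP step: differentiate a targeted loss with respect to the discrete input while keeping the weights fixed, then follow the gradient to pick a semantics-preserving edit (e.g., a variable rename). The sketch below is a minimal, hypothetical illustration of that idea in Python/PyTorch, not the authors' implementation; the toy embedding-plus-linear "model of code", the function name targeted_rename_step, and all shapes and token ids are assumptions made for illustration only.

import torch
import torch.nn.functional as F

# Toy stand-in for a model of code: embed a single variable-name token,
# then classify it with a linear layer (hypothetical, for illustration).
vocab_size, embed_dim, num_labels = 100, 16, 5
torch.manual_seed(0)
embedding = torch.nn.Embedding(vocab_size, embed_dim)
classifier = torch.nn.Linear(embed_dim, num_labels)

def targeted_rename_step(var_token: int, target_label: int) -> int:
    # Represent the discrete token as a one-hot vector so gradients can be
    # taken with respect to the input while the model weights stay constant.
    one_hot = F.one_hot(torch.tensor(var_token), vocab_size).float()
    one_hot.requires_grad_(True)
    logits = classifier(one_hot @ embedding.weight)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([target_label]))
    loss.backward()
    # First-order approximation: the vocabulary entry with the most negative
    # gradient component is expected to lower the targeted loss the most,
    # i.e., push the prediction toward the adversary's chosen label.
    return int(one_hot.grad.argmin())

print(targeted_rename_step(var_token=7, target_label=3))

In the paper's setting this gradient signal ranks candidate identifier names (or other semantics-preserving edits) over real programs; the snippet above only illustrates the gradient-through-one-hot trick on a toy classifier.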
Pages: 30