Adversarial Training with Fast Gradient Projection Method against Synonym Substitution Based Text Attacks

Cited: 0
Authors
Wang, Xiaosen [1 ]
Yang, Yichen [1 ]
Deng, Yihe [2 ]
He, Kun [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
[2] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Adversarial training is the most empirically successful approach to improving the robustness of deep neural networks for image classification. For text classification, however, existing synonym substitution based adversarial attacks are effective but too inefficient to be incorporated into practical text adversarial training. Gradient-based attacks, which are very efficient for images, are hard to adapt to synonym substitution based text attacks because of the lexical, grammatical, and semantic constraints and the discrete text input space. We therefore propose a fast text adversarial attack method based on synonym substitution, called the Fast Gradient Projection Method (FGPM), which is about 20 times faster than existing text attack methods while achieving comparable attack performance. We then incorporate FGPM into adversarial training and propose a text defense method called Adversarial Training with FGPM enhanced by Logit pairing (ATFL). Experiments show that ATFL significantly improves model robustness and blocks the transferability of adversarial examples.
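The core idea the abstract describes can be sketched as follows: use the gradient of the loss with respect to the word embeddings to estimate, via a first-order approximation, how much the loss would increase if a word were replaced by one of its synonyms, then greedily take the best substitution. The snippet below is a minimal illustrative sketch of that selection step, not the authors' implementation; the function name `fgpm_step`, the array shapes, and the synonym lists are all assumptions made for the example.

```python
import numpy as np

def fgpm_step(embeddings, grad, synonym_ids, emb_table):
    """One greedy substitution step in the spirit of FGPM (illustrative sketch).

    embeddings:  (seq_len, dim) embeddings of the current sentence
    grad:        (seq_len, dim) gradient of the loss w.r.t. those embeddings
    synonym_ids: per-position lists of candidate synonym token ids
    emb_table:   (vocab, dim) embedding matrix

    Returns (position, token_id) of the substitution with the largest
    first-order estimated loss increase, or None if no candidate helps.
    """
    best, best_gain = None, 0.0
    for i, cands in enumerate(synonym_ids):
        for tok in cands:
            # Offset in embedding space if word i is replaced by synonym tok.
            delta = emb_table[tok] - embeddings[i]
            # Project the gradient onto that offset: a first-order estimate
            # of the loss change caused by the substitution.
            gain = float(np.dot(grad[i], delta))
            if gain > best_gain:
                best_gain, best = gain, (i, tok)
    return best

# Toy usage: 3-word vocabulary, 2-dim embeddings, a two-token sentence.
emb_table = np.array([[1., 0.], [0., 1.], [2., 0.]])
sent_emb = emb_table[[0, 1]]
grad = np.array([[1., 0.], [0., 0.]])  # loss gradient (illustrative values)
print(fgpm_step(sent_emb, grad, [[2], [0]], emb_table))  # -> (0, 2)
```

In a real attack this step would be iterated, with the gradient recomputed after each substitution, under the lexical and semantic constraints the abstract mentions; the speedup comes from needing only one gradient computation per step rather than one model query per candidate.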
Pages: 13997-14005
Page count: 9
Related papers (50 in total)
  • [1] Gradient-based Adversarial Attacks against Text Transformers
    Guo, Chuan
    Sablayrolles, Alexandre
    Jegou, Herve
    Kiela, Douwe
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 5747 - 5757
  • [2] Nesterov Adam Iterative Fast Gradient Method for Adversarial Attacks
    Chen, Cheng
    Wang, Zhiguang
    Fan, Yongnian
    Zhang, Xue
    Li, Dawei
    Lu, Qiang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT I, 2022, 13529 : 586 - 598
  • [3] Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
    Wang, Jianyu
    Zhang, Haichao
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6628 - 6637
  • [4] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350
  • [5] Adversarial attacks on videos based on the conjugate gradient method
    Dai, Yang
    Feng, Yanghe
    Huang, Jincai
    Gongcheng Kexue Xuebao/Chinese Journal of Engineering, 2024, 46 (09): 1630 - 1637
  • [6] Diversified Adversarial Attacks based on Conjugate Gradient Method
    Yamamura, Keiichiro
    Sato, Haruiki
    Tateiwa, Nariaki
    Hata, Nozomi
    Mitsutake, Toru
    Oe, Issa
    Ishikura, Hiroki
    Fujisawa, Katsuki
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [7] Adaptive Gradient-based Word Saliency for adversarial text attacks
    Qi, Yupeng
    Yang, Xinghao
    Liu, Baodi
    Zhang, Kai
    Liu, Weifeng
    NEUROCOMPUTING, 2024, 590
  • [8] Adversarial Attacks on Visual Objects Using the Fast Gradient Sign Method
    Naqvi, Syed Muhammad Ali
    Shabaz, Mohammad
    Khan, Muhammad Attique
    Hassan, Syeda Iqra
    JOURNAL OF GRID COMPUTING, 2023, 21 (04)
  • [10] Text information hiding algorithm against synonym substitution
    Dai, Zu-Xu
    Chang, Jian
    Chen, Jing
    Sichuan Daxue Xuebao (Gongcheng Kexue Ban)/Journal of Sichuan University (Engineering Science Edition), 2009, 41 (04): 186 - 190