COMBATING FALSE SENSE OF SECURITY: BREAKING THE DEFENSE OF ADVERSARIAL TRAINING VIA NON-GRADIENT ADVERSARIAL ATTACK

Cited: 0
Authors
Fan, Mingyuan [1 ]
Liu, Yang [2 ]
Chen, Cen [3 ]
Yu, Shengxing [4 ]
Guo, Wenzhong [1 ]
Liu, Ximeng [1 ]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian, Peoples R China
[3] East China Normal Univ, Sch Data Sci & Engn, Shanghai, Peoples R China
[4] Peking Univ, Sch Elect Engn & Comp Sci, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
adversarial attack; adversarial training; non-gradient attack;
DOI
10.1109/ICASSP43922.2022.9746138
CLC Number
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Adversarial training is believed to be the most robust and effective defense against adversarial attacks, and gradient-based attack methods are generally adopted to evaluate its effectiveness. However, in this paper, by diving into the existing adversarial attack literature, we find that adversarial examples generated by these attack methods tend to be more perceptible, which may lead to an inaccurate estimate of the effectiveness of adversarial training. Existing adversarial attacks mostly rely on gradient-based optimization, and such optimization has difficulty finding the most effective adversarial examples (i.e., the global extreme points of the loss). In contrast, in this work, we propose a novel Non-Gradient Attack (NGA) to overcome this problem. Extensive experiments show that NGA significantly outperforms state-of-the-art adversarial attacks in Attack Success Rate (ASR) by 2%~7%.
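For intuition, the sketch below contrasts the two attack families the abstract discusses: a standard gradient-based L-inf PGD attack, which follows the local loss gradient and can therefore stall at sub-optimal extreme points, and a generic gradient-free attack that only queries the loss. Note that this is not the paper's NGA algorithm (the abstract does not describe its search procedure); the random-search routine, the dummy model, and hyperparameters such as `eps`, `alpha`, and `queries` are illustrative assumptions.

```python
# Illustrative sketch only: contrasts a gradient-based PGD baseline with a
# generic gradient-free random-search attack. This is NOT the paper's NGA;
# all names and hyperparameters here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Gradient-based baseline (L-inf PGD): ascends the local loss gradient,
    so it can get stuck at sub-optimal (non-global) extreme points."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Signed gradient step, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def random_search_attack(model, x, y, eps=8/255, queries=200):
    """Gradient-free attack in the same spirit as a non-gradient method:
    it only queries the loss (forward passes) and keeps perturbations that
    increase it; acceptance here is per batch for simplicity."""
    with torch.no_grad():
        best = x.clone()
        best_loss = F.cross_entropy(model(best), y)
        for _ in range(queries):
            cand = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
            loss = F.cross_entropy(model(cand), y)
            if loss > best_loss:
                best, best_loss = cand, loss
    return best

if __name__ == "__main__":
    # Tiny dummy classifier so the sketch runs end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)      # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (4,))    # random labels
    adv_g = pgd_attack(model, x, y)
    adv_n = random_search_attack(model, x, y)
    print((adv_g - x).abs().max(), (adv_n - x).abs().max())
```

The key design difference is that the gradient-free variant never backpropagates: it depends only on loss queries, so defenses whose robustness rests on distorted or masked gradients do not automatically blunt it.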
Pages: 3293-3297
Page count: 5