GNP ATTACK: TRANSFERABLE ADVERSARIAL EXAMPLES VIA GRADIENT NORM PENALTY

Cited by: 0
Authors
Wu, Tao [1 ]
Luo, Tie [1 ]
Wunsch, Donald C. [2 ]
Affiliations
[1] Missouri Univ Sci & Technol, Dept Comp Sci, Rolla, MO 65409 USA
[2] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO USA
Keywords
Adversarial machine learning; Transferability; Deep neural networks; Input gradient regularization
DOI
10.1109/ICIP49359.2023.10223158
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models, without requiring insider knowledge about those models. Previous methods often generate AE with little or no transferability: they overfit to the particular architecture and feature representations of the source (white-box) model, so the generated AE barely work on target (black-box) models. In this paper, we propose a novel approach that enhances AE transferability using a Gradient Norm Penalty (GNP). It drives the loss-function optimization procedure toward a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is highly effective at generating AE with strong transferability. We also demonstrate its flexibility: it can be easily integrated with other gradient-based methods to mount stronger transfer-based attacks.
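The idea of penalizing the gradient norm during AE generation can be illustrated with a minimal sketch. This is not the paper's implementation: the toy quadratic loss, the hyper-parameter names (`beta`, `r`), and the finite-difference approximation of the penalty gradient are all assumptions made for illustration, grafted onto a standard I-FGSM-style iterative attack.

```python
import numpy as np

def loss(x):
    # Toy surrogate loss; an untargeted attacker wants to *increase* it.
    return 0.5 * np.sum(x ** 2)

def grad(x):
    # Analytic gradient of the toy loss above.
    return x

def gnp_attack(x, eps=0.3, alpha=0.05, steps=10, beta=0.8, r=0.01):
    """Iterative sign-gradient attack with a gradient norm penalty (sketch).

    Each step effectively ascends L(x) - beta * ||grad L(x)||, which biases
    the iterates toward flat regions of the loss landscape. The gradient of
    the norm-penalty term is approximated by a finite difference along the
    normalized gradient direction u:
        grad ||grad L(x)|| ~= (grad L(x + r*u) - grad L(x)) / r
    """
    x_adv = x.copy()
    for _ in range(steps):
        g1 = grad(x_adv)
        u = g1 / (np.linalg.norm(g1) + 1e-12)
        g2 = grad(x_adv + r * u)
        # Ascend the loss while descending the gradient-norm penalty.
        g = g1 - beta * (g2 - g1) / r
        x_adv = x_adv + alpha * np.sign(g)
        # Project back onto the L_inf ball of radius eps around x.
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv
```

On the toy loss, the resulting perturbation stays within the `eps` budget while increasing the loss; with a real model, `loss`/`grad` would come from the white-box source network, and `g` could equally be fed into momentum or input-diversity variants, reflecting the plug-and-play flexibility claimed in the abstract.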
Pages: 3110-3114
Page count: 5