Toward Transferable Attack via Adversarial Diffusion in Face Recognition

Cited: 0
Authors
Hu, Cong [1 ,2 ]
Li, Yuanbo [1 ,2 ]
Feng, Zhenhua [3 ]
Wu, Xiaojun [1 ,2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Jiangsu, Peoples R China
[2] Jiangnan Univ, Jiangsu Prov Lab Pattern Recognit & Computat Intel, Wuxi 214122, Jiangsu, Peoples R China
[3] Univ Surrey, Sch Comp Sci & Elect Engn, Guildford GU2 7XH, England
Funding
National Natural Science Foundation of China;
Keywords
Face recognition; deep convolutional neural networks; adversarial example; transferable attack; diffusion model;
DOI
10.1109/TIFS.2024.3402167
Chinese Library Classification Number
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
Modern face recognition systems widely use deep convolutional neural networks (DCNNs). However, DCNNs are susceptible to adversarial examples, posing security risks to these systems. Adversarial examples that transfer from surrogate to target models greatly undermine the robustness of DCNNs. Numerous attempts have been made to generate transferable adversarial examples, but the existing methods often suffer from limited transferability or produce adversarial examples with poor perceptual image quality. Recently, diffusion models have shown remarkable success in image generation and have excelled in various downstream tasks. However, their potential in adversarial attacks remains largely unexplored. To bridge this gap, we propose a novel approach, namely Adversarial Diffusion Attack (ADA), for generating transferable adversarial facial examples. ADA employs a dynamic game-like strategy between injection and denoising that progressively reinforces the robustness of the adversarial perturbation during the reverse process of the diffusion model. Additionally, both the adversarial perturbation and the residual image are embedded to drift the benign distribution towards the adversarial distribution, crafting adversarial examples with high image quality. Extensive experimental results obtained on two benchmark datasets, LFW and CelebA-HQ, demonstrate that ADA achieves higher attack success rates and produces adversarial examples with superior image quality compared to state-of-the-art methods.
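The abstract describes alternating adversarial injection and denoising inside the diffusion reverse process. The following is a minimal sketch of that general pattern only, not the authors' ADA implementation: it assumes a simplified DDIM-style update, a cosine-similarity impersonation loss on a surrogate face encoder, and hypothetical `denoiser(x_t, t)` and `face_encoder(x)` interfaces.

```python
# Sketch only (assumptions noted above): alternate a denoising step with an
# adversarial injection step on the predicted clean image, then re-noise so
# the next denoising pass absorbs the perturbation into a natural-looking face.
import torch
import torch.nn.functional as F


def adversarial_reverse_process(
    x_T: torch.Tensor,               # starting noisy image, shape (B, C, H, W)
    denoiser,                        # hypothetical: (x_t, t) -> predicted noise eps
    face_encoder,                    # surrogate FR model: image -> identity embedding
    target_embedding: torch.Tensor,  # embedding of the target identity
    alphas_cumprod: torch.Tensor,    # DDPM cumulative-alpha schedule, shape (T,)
    num_steps: int = 50,
    inject_strength: float = 2e-3,
) -> torch.Tensor:
    x_t = x_T
    T = len(alphas_cumprod)
    timesteps = torch.linspace(T - 1, 0, num_steps).long()

    for t in timesteps:
        # Denoising step: estimate the clean image x0 from the current x_t.
        with torch.no_grad():
            eps = denoiser(x_t, t)
        a_t = alphas_cumprod[t]
        x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)

        # Injection step: nudge the x0 estimate toward the adversarial objective
        # on the surrogate model (here: impersonation, i.e. maximize cosine
        # similarity with the target identity embedding).
        x0_adv = x0_pred.detach().requires_grad_(True)
        emb = F.normalize(face_encoder(x0_adv), dim=-1)
        tgt = F.normalize(target_embedding, dim=-1)
        loss = -(emb * tgt).sum(dim=-1).mean()
        grad = torch.autograd.grad(loss, x0_adv)[0]
        x0_adv = (x0_adv - inject_strength * grad.sign()).detach()

        # Re-noise the perturbed estimate to the previous timestep.
        t_prev = max(int(t) - T // num_steps, 0)
        a_prev = alphas_cumprod[t_prev]
        x_t = torch.sqrt(a_prev) * x0_adv + torch.sqrt(1.0 - a_prev) * eps

    return x_t.clamp(-1.0, 1.0)
```

A dodging variant of this sketch would flip the sign of the loss to push the embedding away from the source identity; the step count and injection strength are illustrative values, not settings reported in the paper.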
Pages: 5506 - 5519
Page count: 14
Related Papers
50 records in total
  • [1] Towards Transferable Adversarial Attack Against Deep Face Recognition
    Zhong, Yaoyao
    Deng, Weihong
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1452 - 1466
  • [2] Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition
    Jia, Shuai
    Yin, Bangjie
    Yao, Taiping
    Ding, Shouhong
    Shen, Chunhua
    Yang, Xiaokang
    Ma, Chao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Speckle-Variant Attack: Toward Transferable Adversarial Attack to SAR Target Recognition
    Peng, Bowen
    Peng, Bo
    Zhou, Jie
    Xia, Jingyuan
    Liu, Li
    [J]. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [4] Hierarchical feature transformation attack: Generate transferable adversarial examples for face recognition
    Li, Yuanbo
    Hu, Cong
    Wang, Rui
    Wu, Xiaojun
    [J]. APPLIED SOFT COMPUTING, 2025, 172
  • [5] Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition
    Li, Zexin
    Yin, Bangjie
    Yao, Taiping
    Guo, Junfeng
    Ding, Shouhong
    Chen, Simin
    Liu, Cong
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24626 - 24637
  • [6] Crafting Transferable Adversarial Examples Against Face Recognition via Gradient Eroding
    Zhou, H.
    Wang, Y.
    Tan, Y.-A.
    Wu, S.
    Zhao, Y.
    Zhang, Q.
    Li, Y.
    [J]. IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (01): 412 - 419
  • [7] Transferable Black-Box Attack Against Face Recognition With Spatial Mutable Adversarial Patch
    Ma, Haotian
    Xu, Ke
    Jiang, Xinghao
    Zhao, Zeyu
    Sun, Tanfeng
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 5636 - 5650
  • [8] Transferable Sparse Adversarial Attack on Modulation Recognition With Generative Networks
    Jiang, Zenghui
    Zeng, Weijun
    Zhou, Xingyu
    Chen, Pu
    Yin, Shenqian
    [J]. IEEE COMMUNICATIONS LETTERS, 2024, 28 (05) : 999 - 1003
  • [9] Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization
    Gong, Huihui
    Dong, Minjing
    Ma, Siqi
    Camtepe, Seyit
    Nepal, Surya
    Xu, Chang
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5014 - 5025
  • [10] Generative Transferable Adversarial Attack
    Li, Yifeng
    Zhang, Ya
    Zhang, Rui
    Wang, Yanfeng
    [J]. ICVIP 2019: PROCEEDINGS OF 2019 3RD INTERNATIONAL CONFERENCE ON VIDEO AND IMAGE PROCESSING, 2019, : 84 - 89