AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection

Cited by: 0
Authors:
Meng, Xiangtao [1 ]
Wang, Li [1 ]
Guo, Shanqing [1 ]
Ju, Lei [1 ]
Zhao, Qingchuan [2 ]
Affiliations:
[1] Shandong Univ, Jinan, Peoples R China
[2] City Univ Hong Kong, Hong Kong, Peoples R China
Funding:
National Natural Science Foundation of China
DOI: 10.1109/SP54263.2024.00155
Chinese Library Classification: TP [Automation and Computer Technology]
Discipline Code: 0812
Abstract
While DeepFake applications have become popular in recent years, their abuse poses a serious privacy threat. Unfortunately, most detection algorithms designed to mitigate this abuse are inherently vulnerable to adversarial attacks because they are built atop DNN-based classification models, and the literature has demonstrated that they can be bypassed by introducing pixel-level perturbations. Although corresponding mitigations have been proposed, we have identified a new attribute-variation-based adversarial attack (AVA) that perturbs the latent space via a combination of a Gaussian prior and a semantic discriminator to bypass such mitigation. It perturbs the semantics in the attribute space of DeepFake images in ways that are inconspicuous to human beings (e.g., mouth open) but can result in substantial differences in DeepFake detection. We evaluate the proposed AVA attack on nine state-of-the-art DeepFake detection algorithms and applications. The empirical results demonstrate that the AVA attack defeats state-of-the-art black-box attacks against DeepFake detectors and achieves a success rate of more than 95% on two commercial DeepFake detectors. Moreover, our human study indicates that AVA-generated DeepFake images are often imperceptible to humans, which raises significant security and privacy concerns.
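The core idea summarized in the abstract, nudging an attribute-space latent code to lower a detector's "fake" score while a Gaussian prior keeps the edit close to the original, can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: the linear sigmoid "detector", the latent dimensionality, and plain gradient descent stand in for the paper's actual models, semantic discriminator, and optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # stand-in weights for a toy linear detector

def detector_score(z):
    """Toy detector: sigmoid of a linear score (closer to 1 = 'fake')."""
    return 1.0 / (1.0 + np.exp(-w @ z))

def detector_grad(z):
    """Analytic gradient of detector_score with respect to z."""
    s = detector_score(z)
    return s * (1.0 - s) * w

z0 = rng.normal(size=8)   # original attribute latent code
z = z0.copy()
lam = 0.1                 # weight of the Gaussian-prior (L2) pull toward z0
step = 0.1
for _ in range(500):
    # Descend the 'fake' score while penalizing drift from z0, so the
    # resulting attribute edit stays close to the original image.
    z -= step * (detector_grad(z) + lam * (z - z0))

print(f"score before: {detector_score(z0):.3f}  after: {detector_score(z):.3f}")
```

The L2 pull toward `z0` plays the role of the Gaussian prior: at convergence the latent has moved only as far as needed to trade off detector evasion against inconspicuousness, which is the qualitative behavior the abstract describes.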
Pages: 74-90 (17 pages)