DeepFake detection against adversarial examples based on D-VAEGAN

Cited by: 1
Authors
Chen, Ping [1 ,2 ]
Xu, Ming [1 ,3 ]
Qi, Jianxiang [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou, Zhejiang Province, Peoples R China
[2] Minnan Sci & Technol Univ, Sch Comp Informat, Quanzhou, Fujian Province, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Zhejiang Province, Peoples R China
Keywords
computer vision; image denoising;
DOI
10.1049/ipr2.12973
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the development of DeepFake has raised many security concerns, making DeepFake detection critical. However, existing DeepFake detection methods are often vulnerable to adversarial attacks: adding carefully crafted, imperceptible perturbations to forged images can allow them to evade detection. In this paper, a DeepFake detection method based on image denoising, named D-VAEGAN, is proposed by combining a variational autoencoder (VAE) and a generative adversarial network (GAN). First, an encoder is designed to extract the features of the image in a low-dimensional latent space, and a decoder then reconstructs the original clean image from these latent features. Second, an auxiliary discriminative network is introduced to improve the quality of the reconstructed images and thereby the performance of the model. Furthermore, a feature similarity loss is added as a penalty term to the reconstruction objective to improve adversarial robustness. Experimental results on the FaceForensics++ dataset show that the proposed approach significantly outperforms five adversarial-training-based defence methods, achieving 96% accuracy, on average about 50% higher than the comparison methods.
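The abstract describes a training objective combining pixel reconstruction, the VAE latent regularizer, an adversarial term from the auxiliary discriminator, and a feature-similarity penalty. The sketch below illustrates how such a combined loss could be assembled; the `encode` stub, the loss weights (`w_kl`, `w_adv`, `w_feat`), and the exact form of each term are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np


def encode(x, dim=16):
    # Stand-in encoder: maps an image to a diagonal-Gaussian latent
    # (mean, log-variance). A real D-VAEGAN encoder would be a CNN.
    flat = x.reshape(-1)
    mu = flat[:dim] * 0.1
    logvar = np.full(dim, -1.0)
    return mu, logvar


def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, as in a standard VAE.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)


def d_vaegan_loss(x_clean, x_recon, feat_clean, feat_recon,
                  mu, logvar, d_score_recon,
                  w_kl=1.0, w_adv=0.1, w_feat=0.5):
    # Hypothetical weighting; the paper does not state its coefficients here.
    rec = np.mean((x_clean - x_recon) ** 2)         # pixel reconstruction
    kl = kl_divergence(mu, logvar)                  # latent-space regularizer
    adv = -np.log(d_score_recon + 1e-8)             # generator's adversarial term
    feat = np.mean((feat_clean - feat_recon) ** 2)  # feature-similarity penalty
    return rec + w_kl * kl + w_adv * adv + w_feat * feat


rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))          # "clean" image stand-in
x_recon = x + 0.1                    # imperfect reconstruction
mu, logvar = encode(x, dim=4)
loss = d_vaegan_loss(x, x_recon,
                     x.reshape(-1)[:4], x_recon.reshape(-1)[:4],
                     mu, logvar, d_score_recon=0.5)
```

At inference time, the idea is that the denoising reconstruction removes the adversarial perturbation before the reconstructed image is passed to a DeepFake detector.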
Pages: 615-626
Page count: 12
Related papers
50 records total
  • [41] Adversarial Examples Against Image-based Malware Classification Systems
    Vi, Bao Ngoc
    Nguyen, Huu Noi
    Nguyen, Ngoc Tran
    Tran, Cao Truong
    PROCEEDINGS OF 2019 11TH INTERNATIONAL CONFERENCE ON KNOWLEDGE AND SYSTEMS ENGINEERING (KSE 2019), 2019, : 347 - 351
  • [42] HF-Defend: Defending Against Adversarial Examples Based on Halftoning
    Liu, Gaozhi
    Li, Sheng
    Qian, Zhenxing
    Zhang, Xinpeng
    2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2022,
  • [43] Towards Robust Detection of Adversarial Examples
    Pang, Tianyu
    Du, Chao
    Dong, Yinpeng
    Zhu, Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [44] The Framework of Cross-Domain and Model Adversarial Attack against Deepfake
    Qiu, Haoxuan
    Du, Yanhui
    Lu, Tianliang
    FUTURE INTERNET, 2022, 14 (02)
  • [45] MAFD: Multiple Adversarial Features Detector for Enhanced Detection of Text-Based Adversarial Examples
    Jin, Kaiwen
    Xiong, Yifeng
    Lou, Shuya
    Yu, Zhen
    NEURAL PROCESSING LETTERS, 2024, 56 (06)
  • [46] ERROR DIFFUSION HALFTONING AGAINST ADVERSARIAL EXAMPLES
    Lo, Shao-Yuan
    Patel, Vishal M.
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3892 - 3896
  • [47] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [48] Deep neural rejection against adversarial examples
    Angelo Sotgiu
    Ambra Demontis
    Marco Melis
    Battista Biggio
    Giorgio Fumera
    Xiaoyi Feng
    Fabio Roli
    EURASIP Journal on Information Security, 2020
  • [49] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102
  • [50] MoNet: Impressionism As A Defense Against Adversarial Examples
    Ge, Huangyi
    Chau, Sze Yiu
    Li, Ninghui
    2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 246 - 255