DeepFake detection against adversarial examples based on D-VAEGAN

Cited: 1
Authors
Chen, Ping [1 ,2 ]
Xu, Ming [1 ,3 ]
Qi, Jianxiang [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou, Zhejiang Province, Peoples R China
[2] Minnan Sci & Technol Univ, Sch Comp Informat, Quanzhou, Fujian Province, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Zhejiang Province, Peoples R China
Keywords
computer vision; image denoising;
DOI
10.1049/ipr2.12973
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the development of DeepFake has raised many security concerns, making DeepFake detection critical. However, existing DeepFake detection methods are often vulnerable to adversarial attacks: adding carefully crafted, imperceptible perturbations to forged images can allow them to evade detection. In this paper, a DeepFake detection method based on image denoising, namely D-VAEGAN, is proposed by combining a variational autoencoder (VAE) and a generative adversarial network (GAN). First, an encoder is designed to extract the features of the image into a low-dimensional latent space, and a decoder then reconstructs the original clean image from these latent features. Second, an auxiliary discriminative network is introduced to further improve the model, raising the quality of the reconstructed images. Furthermore, a feature-similarity loss is added as a penalty term to the reconstruction objective to improve adversarial robustness. Experimental results on the FaceForensics++ dataset show that the proposed approach significantly outperforms five adversarial-training-based defence methods, achieving 96% accuracy, on average about 50% higher than the compared methods.
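The denoising pipeline described in the abstract (encode the possibly perturbed image into a low-dimensional latent space, decode a clean reconstruction, and penalise latent-feature mismatch) can be sketched as a toy example. This is an illustrative sketch only: the paper uses convolutional VAE/GAN networks, whereas the linear maps, dimensions, and weight values below are hypothetical stand-ins chosen to show the data flow and the combined objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the D-VAEGAN encoder/decoder (illustrative only: the
# paper uses convolutional networks; linear maps here just show the flow).
D_IMG, D_LAT = 16, 4                         # toy image and latent dimensions
W_enc = rng.normal(0, 0.1, (D_LAT, D_IMG))   # hypothetical encoder weights
W_dec = rng.normal(0, 0.1, (D_IMG, D_LAT))   # hypothetical decoder weights

def encode(x):
    """Map an image to low-dimensional latent features."""
    return W_enc @ x

def decode(z):
    """Reconstruct a (clean) image from the latent features."""
    return W_dec @ z

def d_vaegan_loss(x_clean, x_adv, lam=1.0):
    """Reconstruction loss plus the feature-similarity penalty from the
    abstract: latent features of the perturbed input should stay close to
    those of the clean image, which improves adversarial robustness."""
    z_adv = encode(x_adv)
    recon = decode(z_adv)
    l_rec = np.mean((recon - x_clean) ** 2)           # pixel reconstruction
    l_feat = np.mean((z_adv - encode(x_clean)) ** 2)  # feature similarity
    return l_rec + lam * l_feat

x = rng.normal(size=D_IMG)                  # stand-in "clean" forged image
x_adv = x + 0.01 * rng.normal(size=D_IMG)   # small adversarial perturbation
loss = d_vaegan_loss(x, x_adv)
print(float(loss))
```

The auxiliary discriminator from the abstract is omitted here; in the full model it would contribute an additional adversarial term to this objective to sharpen the reconstructed images.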
Pages: 615 - 626
Page count: 12
Related Papers
50 records in total
  • [1] EnsembleDet: ensembling against adversarial attack on deepfake detection
    Dutta, Himanshu
    Pandey, Aditya
    Bilgaiyan, Saurabh
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (06)
  • [2] Defense Against Adversarial Attacks on Audio DeepFake Detection
    Kawa, Piotr
    Plata, Marcin
    Syga, Piotr
    INTERSPEECH 2023, 2023, : 5276 - 5280
  • [3] Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
    Hussain, Shehzeen
    Neekhara, Paarth
    Jere, Malhar
    Koushanfar, Farinaz
    McAuley, Julian
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3347 - 3356
  • [4] Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View
    Liu, Jiayang
    Zhang, Weiming
    Zhang, Yiwei
    Hou, Dongdong
    Liu, Yujia
    Zha, Hongyue
    Yu, Nenghai
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4820 - 4829
  • [5] Adversarial Examples Against the Deep Learning Based Network Intrusion Detection Systems
    Yang, Kaichen
    Liu, Jianqing
    Zhang, Chi
    Fang, Yuguang
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 559 - 564
  • [6] On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
    Rauter, Roland
    Nocker, Martin
    Merkle, Florian
    Schoettle, Pascal
    PROCEEDINGS OF 2023 8TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2023, 2023, : 54 - 60
  • [7] Detection of Adversarial Examples Based on the Neurons Distribution
    Zeng, Jiang-Yi
    Chen, Chi-Yuan
    Cho, Hsin-Hung
    INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2022, 2022, 13620 : 397 - 405
  • [8] Adversarial Threats to DeepFake Detection: A Practical Perspective
    Neekhara, Paarth
    Dolhansky, Brian
    Bitton, Joanna
    Ferrer, Cristian Canton
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 923 - 932
  • [9] Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
    Fan, Li
    Li, Wei
    Cui, Xiaohui
    FUTURE INTERNET, 2021, 13 (11)
  • [10] Assessing Transferability of Adversarial Examples against Malware Detection Classifiers
    Wang, Yixiang
    Liu, Jiqiang
    Chang, Xiaolin
    CF '19 - PROCEEDINGS OF THE 16TH ACM INTERNATIONAL CONFERENCE ON COMPUTING FRONTIERS, 2019, : 211 - 214