DeepFake detection against adversarial examples based on D-VAEGAN

Cited by: 1
|
Authors
Chen, Ping [1 ,2 ]
Xu, Ming [1 ,3 ]
Qi, Jianxiang [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou, Zhejiang Province, Peoples R China
[2] Minnan Sci & Technol Univ, Sch Comp Informat, Quanzhou, Fujian Province, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Zhejiang Province, Peoples R China
Keywords
computer vision; image denoising;
DOI
10.1049/ipr2.12973
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the development of DeepFake technology has raised many security concerns, making DeepFake detection critical. However, existing DeepFake detection methods are often vulnerable to adversarial attacks: adding carefully crafted, imperceptible perturbations to forged images can allow them to evade detection. In this paper, a DeepFake detection method based on image denoising, namely D-VAEGAN, is proposed by combining a variational autoencoder (VAE) and a generative adversarial network (GAN). First, an encoder is designed to extract image features in a low-dimensional latent space, and a decoder then reconstructs the original clean image from these latent features. Second, an auxiliary discriminative network is introduced to further improve the model's performance by raising the quality of the reconstructed images. Furthermore, a feature-similarity loss is added as a penalty term to the reconstruction objective to improve adversarial robustness. Experimental results on the FaceForensics++ dataset show that the proposed approach significantly outperforms five adversarial-training-based defence methods, achieving 96% accuracy, on average about 50% higher than the comparison methods.
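The abstract's training objective (pixel reconstruction + KL term + auxiliary discriminator + feature-similarity penalty) can be sketched as below. This is a minimal illustrative PyTorch implementation assuming 64x64 RGB inputs and small hypothetical layer sizes; the paper's exact architecture, hyperparameters, and loss weights are not given here.

```python
# Sketch of a D-VAEGAN-style denoising objective (hypothetical sizes,
# not the authors' exact network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a (possibly adversarially perturbed) image to a latent code."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, z_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, z_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    """Reconstructs the clean image from the latent code."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.fc = nn.Linear(z_dim, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.deconv(h)

class Discriminator(nn.Module):
    """Auxiliary network; its intermediate features also drive the
    feature-similarity penalty."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 16 * 16, 1)

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

def dvaegan_generator_loss(enc, dec, disc, noisy, clean, beta=1.0, lam=0.1):
    mu, logvar = enc(noisy)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    recon = dec(z)
    rec = F.mse_loss(recon, clean)                           # pixel reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    logit_fake, feat_fake = disc(recon)
    _, feat_real = disc(clean)
    # generator wants the discriminator to judge reconstructions as real
    adv = F.binary_cross_entropy_with_logits(logit_fake, torch.ones_like(logit_fake))
    feat_sim = F.mse_loss(feat_fake, feat_real)              # feature-similarity penalty
    return rec + beta * kl + adv + lam * feat_sim, recon

# Usage with random stand-in tensors (noisy = perturbed input, clean = target):
enc, dec, disc = Encoder(), Decoder(), Discriminator()
noisy = torch.rand(2, 3, 64, 64)
clean = torch.rand(2, 3, 64, 64)
loss, recon = dvaegan_generator_loss(enc, dec, disc, noisy, clean)
```

In such a setup the denoiser is placed in front of an existing DeepFake detector, so adversarial perturbations are stripped before classification rather than the detector itself being retrained.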
Pages: 615 - 626
Page count: 12
Related Papers
50 records in total
  • [31] Audio-deepfake detection: Adversarial attacks and countermeasures
    Rabhi, Mouna
    Bakiras, Spiridon
    Di Pietro, Roberto
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 250
  • [32] Adversarial Examples Detection of Radio Signals Based on Multifeature Fusion
    Xu, Dongwei
    Yang, Hao
    Gu, Chuntao
    Chen, Zhuangzhi
    Xuan, Qi
    Yang, Xiaoniu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (12) : 3607 - 3611
  • [33] LSD: Adversarial Examples Detection Based on Label Sequences Discrepancy
    Zhang, Shigeng
    Chen, Shuxin
    Hua, Chengyao
    Li, Zhetao
    Li, Yanchun
    Liu, Xuan
    Chen, Kai
    Li, Zhankai
    Wang, Weiping
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 5133 - 5147
  • [34] Detection of Adversarial Examples Based on Sensitivities to Noise Removal Filter
    Higashi, Akinori
    Kuribayashi, Minoru
    Funabiki, Nobuo
    Nguyen, Huy H.
    Echizen, Isao
    2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 1386 - 1391
  • [35] Black-box attacks against log anomaly detection with adversarial examples
    Lu, Siyang
    Wang, Mingquan
    Wang, Dongdong
    Wei, Xiang
    Xiao, Sizhe
    Wang, Zhiwei
    Han, Ningning
    Wang, Liqiang
    INFORMATION SCIENCES, 2023, 619 : 249 - 262
  • [36] AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection
    Meng, Xiangtao
    Wang, Li
    Guo, Shanqing
    Ju, Lei
    Zhao, Qingchuan
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 74 - 90
  • [37] Understanding adversarial robustness against on-manifold adversarial examples
    Xiao, Jiancong
    Yang, Liusha
    Fan, Yanbo
    Wang, Jue
    Luo, Zhi-Quan
    PATTERN RECOGNITION, 2025, 159
  • [38] Adversarial Magnification to Deceive Deepfake Detection Through Super Resolution
    Coccomini, Davide Alessandro
    Caldelli, Roberto
    Amato, Giuseppe
    Falchi, Fabrizio
    Gennaro, Claudio
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II, 2025, 2134 : 491 - 501
  • [39] Key-Based Input Transformation Defense Against Adversarial Examples
    Qin, Yi
    Yue, Chuan
    2021 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE (IPCCC), 2021,
  • [40] ON THE TRANSFERABILITY OF ADVERSARIAL EXAMPLES AGAINST CNN-BASED IMAGE FORENSICS
    Barni, M.
    Kallas, K.
    Nowroozi, E.
    Tondi, B.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 8286 - 8290