Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks

Cited by: 0
Authors
Abbasi, Maryam [1 ]
Vaz, Paulo [2 ]
Silva, Jose [2 ]
Martins, Pedro [2 ]
Affiliations
[1] Polytech Coimbra, Appl Res Inst, P-3045093 Coimbra, Portugal
[2] Polytech Viseu, Res Ctr Digital Serv CISeD, P-3504510 Viseu, Portugal
Source
APPLIED SCIENCES-BASEL, 2025, Vol. 15, Issue 3
Keywords
deepfakes; deep learning; XCeption; ResNet; VGG; DFDC; FaceForensics++; adversarial robustness; detection models; manipulation
DOI
10.3390/app15031225
CLC number: O6 [Chemistry]
Discipline code: 0703
Abstract
The rise of deepfakes (synthetic media generated using artificial intelligence) threatens the authenticity of digital content and facilitates misinformation and manipulation. Deepfakes can depict real or entirely fictitious individuals and leverage state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods struggle to generalize across datasets and remain vulnerable to adversarial attacks. This study evaluates three convolutional neural network architectures (XCeption, ResNet, and VGG16) for deepfake detection on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ videos. Performance is measured with accuracy, precision, F1-score, AUC-ROC, and the Matthews Correlation Coefficient (MCC), and resilience to adversarial perturbations is assessed with the Fast Gradient Sign Method (FGSM). Among the tested models, XCeption achieves the highest accuracy (89.2% on DFDC), strong generalization, and real-time suitability, while VGG16 excels in precision and ResNet provides faster inference. All models, however, lose performance under adversarial conditions, underscoring the need for stronger resilience. These findings indicate that robust detection systems must account for advanced generative approaches, adversarial defenses, and cross-dataset adaptation to counter evolving deepfake threats effectively.
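The record gives no implementation details beyond the abstract. As a rough, non-authoritative sketch of the evaluation pipeline described above, the Python snippet below illustrates how FGSM perturbations and the reported metrics (accuracy, precision, F1-score, AUC-ROC, MCC) could be computed with PyTorch and scikit-learn. The function names, the epsilon value, the binary real/fake label convention, and the "model" placeholder are assumptions for illustration, not the authors' code.

    import torch
    import torch.nn.functional as F
    from sklearn.metrics import (accuracy_score, precision_score, f1_score,
                                 roc_auc_score, matthews_corrcoef)

    def fgsm_attack(model, frames, labels, epsilon=0.01):
        """FGSM: shift each input pixel by epsilon in the sign of the loss gradient."""
        frames = frames.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(frames), labels)
        model.zero_grad()
        loss.backward()
        adv = frames + epsilon * frames.grad.sign()
        return torch.clamp(adv, 0.0, 1.0).detach()  # keep frames in the valid [0, 1] range

    @torch.no_grad()
    def evaluate(model, frames, labels):
        """Compute the metrics reported in the study for one batch of face frames."""
        probs = torch.softmax(model(frames), dim=1)[:, 1].cpu().numpy()  # assumed P(fake)
        preds = (probs >= 0.5).astype(int)
        y = labels.cpu().numpy()
        return {
            "accuracy": accuracy_score(y, preds),
            "precision": precision_score(y, preds),
            "f1": f1_score(y, preds),
            "auc_roc": roc_auc_score(y, probs),
            "mcc": matthews_corrcoef(y, preds),
        }

    # Clean vs. adversarial evaluation of the same detector:
    # clean_scores = evaluate(model, frames, labels)
    # adv_scores = evaluate(model, fgsm_attack(model, frames, labels), labels)

Comparing evaluate() on clean frames with evaluate() on the fgsm_attack() outputs mirrors the clean-versus-adversarial comparison summarized in the abstract.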
Pages: 16