Cascaded Network Based on EfficientNet and Transformer for Deepfake Video Detection

Cited by: 0
Authors
Liwei Deng
Jiandong Wang
Zhen Liu
Affiliations
[1] Harbin University of Science and Technology, Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation
Source
Neural Processing Letters | 2023, Vol. 55
Keywords
Deepfake detection; EfficientNetV2S; Transformer; Visualization
DOI
Not available
Abstract
With the continuous development of deepfake technology, forged videos are constantly released on various network media; this technology facilitates people's lives but also has serious negative impacts. These forged videos are highly realistic, which makes detection very challenging. Moreover, most current deepfake detection models focus only on model design and lack versatility. To address these issues, we propose a cascaded network based on EfficientNet and Transformer for deepfake detection. The improved convolutional neural network EfficientNetV2S serves as the feature extractor; its features are fed into a Transformer, whose attention mechanism performs the classification. We also replace the Transformer's traditional attention mechanism with Spatial-Reduction Attention (SRA). In the preprocessing stage, we carefully extract and screen real and fake faces, and we train our model on the DFDC and FaceForensics++ benchmarks, achieving state-of-the-art accuracies of 92.16% and 96.75%, respectively. Finally, we obtain excellent visualization results on deepfake videos, demonstrating the practicality of our method.
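The Spatial-Reduction Attention (SRA) mentioned in the abstract shrinks the key/value token set spatially before attention, cutting the cost from O(N²) to O(N²/R²) for an N-token feature map with reduction ratio R. The single-head NumPy sketch below is illustrative only: the random projection matrices and the average-pooling reduction stand in for the learned linear layers and strided-convolution reduction an actual SRA module would use.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sra_attention(x, h, w, r, rng):
    """Single-head Spatial-Reduction Attention sketch.

    x   : (n, d) token sequence flattened from an h*w feature map (n = h*w)
    r   : spatial reduction ratio applied to keys and values
    rng : numpy Generator supplying stand-in projection weights
    """
    n, d = x.shape
    assert n == h * w and h % r == 0 and w % r == 0
    # Random projections stand in for the learned Q/K/V weight matrices.
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # Spatially reduce keys/values: average-pool each r x r patch of the map,
    # leaving n / r^2 tokens (a real SRA uses a strided conv here).
    xr = x.reshape(h // r, r, w // r, r, d).mean(axis=(1, 3)).reshape(-1, d)
    q, k, v = x @ wq, xr @ wk, xr @ wv
    # Attention matrix is (n, n/r^2) instead of the full (n, n).
    attn = softmax(q @ k.T / np.sqrt(d))
    return attn @ v  # (n, d), same shape as the input tokens
```

Queries still attend from every position, so the output keeps the full spatial resolution; only the attended-to set is coarsened, which is what makes the Transformer stage affordable on the dense feature maps produced by the EfficientNetV2S backbone.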
Pages: 7057-7076 (19 pages)
Related Papers
50 records
  • [1] Cascaded Network Based on EfficientNet and Transformer for Deepfake Video Detection
    Deng, Liwei
    Wang, Jiandong
    Liu, Zhen
    [J]. NEURAL PROCESSING LETTERS, 2023, 55 (06) : 7057 - 7076
  • [2] Deepfake Video Detection Based on EfficientNet-V2 Network
    Deng, Liwei
    Suo, Hongfei
    Li, Dongjie
    [J]. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [3] Combining EfficientNet and Vision Transformers for Video Deepfake Detection
    Coccomini, Davide Alessandro
    Messina, Nicola
    Gennaro, Claudio
    Falchi, Fabrizio
    [J]. IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT III, 2022, 13233 : 219 - 229
  • [4] Deepfake Video Detection with Spatiotemporal Dropout Transformer
    Zhang, Daichi
    Lin, Fanzhao
    Hua, Yingying
    Wang, Pengju
    Zeng, Dan
    Ge, Shiming
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5833 - 5841
  • [5] Hybrid Transformer Network for Deepfake Detection
    Khan, Sohail Ahmed
    Dang-Nguyen, Duc-Tien
    [J]. 19TH INTERNATIONAL CONFERENCE ON CONTENT-BASED MULTIMEDIA INDEXING, CBMI 2022, 2022, : 8 - 14
  • [6] Video Transformer for Deepfake Detection with Incremental Learning
    Khan, Sohail Ahmed
    Dai, Hang
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1821 - 1828
  • [7] Improved Deepfake Video Detection Using Convolutional Vision Transformer
    Deressa, Deressa Wodajo
    Lambert, Peter
    Van Wallendael, Glenn
    Atnafu, Solomon
    Mareen, Hannes
    [J]. 2024 IEEE GAMING, ENTERTAINMENT, AND MEDIA CONFERENCE, GEM 2024, 2024, : 492 - 497
  • [8] MSVT: Multiple Spatiotemporal Views Transformer for DeepFake Video Detection
    Yu, Yang
    Ni, Rongrong
    Zhao, Yao
    Yang, Siyuan
    Xia, Fen
    Jiang, Ning
    Zhao, Guoqing
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4462 - 4471
  • [9] ConTrans-Detect: A Multi-Scale Convolution-Transformer Network for DeepFake Video Detection
    Sun, Weirong
    Ma, Yujun
    Zhang, Hong
    Wang, Ruili
    [J]. 2023 29TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE, M2VIP 2023, 2023
  • [10] ISTVT: Interpretable Spatial-Temporal Video Transformer for Deepfake Detection
    Zhao, Cairong
    Wang, Chutian
    Hu, Guosheng
    Chen, Haonan
    Liu, Chun
    Tang, Jinhui
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1335 - 1348