Detection of deepfake technology in images and videos

Cited: 0
Authors
Liu, Yong [1 ]
Sun, Tianning [2 ]
Wang, Zonghui [3 ]
Zhao, Xu [1 ]
Cheng, Ruosi [1 ]
Shi, Baolan [4 ]
Affiliations
[1] PLA Strateg Support Force Informat Engn Univ, Coll Cyberspace Secur, Zhengzhou 450001, Henan, Peoples R China
[2] Zhejiang Lab, Res Inst Intelligent Networks, Hangzhou 311121, Zhejiang, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Zhejiang, Peoples R China
[4] Univ Colorado Boulder, Coll Engn & Appl Sci, Boulder, CO 80309 USA
Keywords
deepfake technology; fake image and video detection; transfer learning; parameter quantity; detection across datasets;
DOI
10.1504/IJAHUC.2024.136851
CLC classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
To address the low accuracy, weak generalisation, and insufficient attention to cross-dataset detection in existing deepfake image and video detectors, this article adopts a combined miniXception and long short-term memory (LSTM) model. First, the miniXception model serves as the backbone network to fully extract spatial features. Second, LSTM extracts temporal features between adjacent frames, and temporal and spatial attention mechanisms are introduced after the convolutional layers to better capture long-distance dependencies in the sequence and improve detection accuracy. Last, cross-dataset training and testing are conducted on the same database using transfer learning. Focal loss is employed as the loss function during training to balance the samples and improve the model's generalisation. Experimental results show that detection accuracy on the FaceSwap dataset reaches 99.05%, which is 0.39% higher than the convolutional neural network-gated recurrent unit (CNN-GRU) model, while the model requires only 10.01 MB of parameters, improving both generalisation ability and detection accuracy.
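The focal loss mentioned in the abstract down-weights easy, well-classified samples so that training concentrates on hard ones, which helps balance real and fake examples. A minimal sketch of the binary form follows, assuming the defaults alpha = 0.25 and gamma = 2 that are common in the literature; the paper's actual hyperparameters are not given in this record.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability that the sample is fake.
    y: ground-truth label (1 = fake, 0 = real).
    alpha, gamma: balancing and focusing parameters; the defaults here
    are common literature values, not necessarily the paper's choices.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss for confidently correct samples
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 0.5 this reduces to half the standard cross-entropy; raising gamma suppresses the contribution of confident, correct predictions.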
Pages: 135-148
Page count: 15