DeepFake detection method based on multi-scale interactive dual-stream network

Citations: 0
Authors
Cheng, Ziyuan [1 ]
Wang, Yiyang [1 ]
Wan, Yongjing [1 ]
Jiang, Cuiling [1 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
Keywords
DeepFake detection; Multi-scale fusion; Interactive dual-stream; High-frequency noise; MANIPULATION;
DOI
10.1016/j.jvcir.2024.104263
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Discipline Code: 0812
Abstract
DeepFake face forgery has serious negative impacts on both society and individuals; research on DeepFake detection technology is therefore necessary. Deep-learning-based DeepFake detection has achieved acceptable results on high-quality datasets, but its performance on low-quality datasets and in cross-dataset settings remains poor. To address this problem, this paper presents a multi-scale interactive dual-stream network (MSIDSnet). The network is divided into spatial- and frequency-domain streams and uses a multi-scale fusion module to capture both the manipulated facial features in the spatial domain under varying conditions and the fine-grained high-frequency noise of forged images. An interactive dual-stream module fully integrates the features of the two streams, and a vision transformer (ViT) further learns the global context of the forged facial features for classification. Experimental results confirm that the method reaches an accuracy of 99.30% on the high-quality dataset Celeb-DF-v2 and 95.51% on the low-quality dataset FaceForensics++; moreover, its cross-dataset results are superior to those of the compared methods.
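The abstract describes two ideas that can be sketched in code: a frequency stream that extracts fine-grained high-frequency noise, and an interactive module that lets the two streams modulate each other before fusion. The following is a minimal NumPy sketch of those two ideas only; the kernel choice, the sigmoid gating, and the function names (`high_freq_residual`, `interactive_fusion`) are illustrative assumptions, not the paper's actual modules.

```python
import numpy as np

def high_freq_residual(img, k=None):
    """High-pass filtering as a stand-in for the frequency stream's
    noise extraction (the Laplacian-style kernel is illustrative)."""
    if k is None:
        k = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=np.float32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    pad = np.pad(img.astype(np.float32), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

def interactive_fusion(spatial_feat, freq_feat):
    """Toy cross-stream interaction: each stream is reweighted by a
    sigmoid gate computed from the other stream, then summed."""
    gate_s = 1.0 / (1.0 + np.exp(-freq_feat))     # frequency gates spatial
    gate_f = 1.0 / (1.0 + np.exp(-spatial_feat))  # spatial gates frequency
    return spatial_feat * gate_s + freq_feat * gate_f

# Demo on a random "image" patch.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
residual = high_freq_residual(img)   # frequency-stream input
fused = interactive_fusion(img, residual)
print(fused.shape)  # (8, 8)
```

Note that a flat (constant) region yields a zero residual, which is why high-pass residuals highlight the blending artifacts that forgeries tend to leave around facial boundaries.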
Pages: 11
Related Papers
50 records in total
  • [21] Multichannel InSAR elevation reconstruction method based on dual-stream network
    Xie, Xianming
    Geng, Dianqiang
    Hou, Guozheng
    Zeng, Qingning
    Zheng, Zhanheng
    OPTICS AND LASERS IN ENGINEERING, 2024, 172
  • [22] MULTI-SCALE PERMUTATION ENTROPY FOR AUDIO DEEPFAKE DETECTION
    Wang, Chenglong
    He, Jiayi
    Yi, Jiangyan
    Tao, Jianhua
    Zhang, Chu Yuan
    Zhang, Xiaohui
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 1406 - 1410
  • [23] Dual-stream autoencoder for channel-level multi-scale feature extraction in hyperspectral unmixing
    Gan, Yuquan
    Wang, Yong
    Li, Qiuyu
    Luo, Yiming
    Wang, Yihong
    Pan, Yushan
    KNOWLEDGE-BASED SYSTEMS, 2025, 317
  • [24] DeepFake detection with multi-scale convolution and vision transformer
    Lin, Hao
    Huang, Wenmin
    Luo, Weiqi
    Lu, Wei
    DIGITAL SIGNAL PROCESSING, 2023, 134
  • [25] Dual-Stream Feature Fusion Network for Detection and ReID in Multi-object Tracking
    He, Qingyou
    Li, Liangqun
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2022, 13629 : 247 - 260
  • [26] Detection Method of Epileptic Seizures Using a Neural Network Model Based on Multimodal Dual-Stream Networks
    Wang, Baiyang
    Xu, Yidong
    Peng, Siyu
    Wang, Hongjun
    Li, Fang
    SENSORS, 2024, 24 (11)
  • [27] Interactive Two-Stream Network Across Modalities for Deepfake Detection
    Wu, Jianghao
    Zhang, Baopeng
    Li, Zhaoyang
    Pang, Guilin
    Teng, Zhu
    Fan, Jianping
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (11) : 6418 - 6430
  • [28] Deep video inpainting detection and localization based on ConvNeXt dual-stream network
    Yao, Ye
    Han, Tingfeng
    Gao, Xudong
    Ren, Yizhi
    Meng, Weizhi
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247
  • [29] A multi-domain dual-stream network for hyperspectral unmixing
    Hu, Jiwei
    Wang, Tianhao
    Jin, Qiwen
    Peng, Chengli
    Liu, Quan
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 135
  • [30] Enhanced salient object detection in remote sensing images via dual-stream semantic interactive network
    Ge, Yanliang
    Liang, Taichuan
    Ren, Junchao
    Chen, Jiaxue
    Bi, Hongbo
    VISUAL COMPUTER, 2024,