CSTAN: A Deepfake Detection Network with CST Attention for Superior Generalization

Cited: 0
Authors
Yang, Rui [1 ,2 ]
You, Kang [2 ]
Pang, Cheng [1 ]
Luo, Xiaonan [1 ,2 ]
Lan, Rushi [1 ,3 ]
Affiliations
[1] Guilin Univ Elect Technol, Guangxi Key Lab Image & G Intelligent Proc, Guilin 541004, Peoples R China
[2] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Peoples R China
[3] Guilin Univ Elect Technol, Int Joint Res Lab Spatio Temporal Informat Intelli, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deepfake detection; attention mechanism; detection model; feature extraction;
DOI
10.3390/s24227101
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
With the advancement of deepfake forgery technology, highly realistic fake faces pose serious security risks to sensor-based facial recognition systems. Most recent deepfake detection methods are deep-learning-based binary classifiers; although they achieve high accuracy in intra-dataset evaluation, they generalize poorly in cross-dataset settings. We propose a deepfake detection model named the Channel-Spatial-Triplet Attention Network (CSTAN), which focuses on the differences between real and fake features and thereby improves the generalization of the detection model. To strengthen the model's ability to learn features of forged image regions, we design the Channel-Spatial-Triplet (CST) attention mechanism, which extracts subtle local information by capturing channel and spatial feature correlations at three different scales. In addition, we propose a new feature extraction backbone, OD-ResNet-34, which embeds omni-dimensional dynamic convolution (ODConv) into the feature extraction network to improve its dynamic adaptability to the data. Trained on the FF++ dataset and tested on the Celeb-DF-v1 and Celeb-DF-v2 datasets, our model shows stronger cross-dataset generalization than comparable models.
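The abstract does not describe the internal layout of the CST block, so the following PyTorch sketch is an illustration only: it assumes the block combines an SE-style channel gate, a CBAM-style spatial gate, and rotation-based triplet-attention branches, and fuses them by averaging. All class names, the reduction ratio, kernel size, and fusion rule below are assumptions for illustration, not the paper's actual design.

# Minimal sketch of a Channel-Spatial-Triplet (CST) attention block.
# Assumed design: SE-style channel attention + CBAM-style spatial attention
# + rotation-based triplet attention, averaged together (not the paper's exact layout).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (assumed design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention(nn.Module):
    """CBAM-style spatial gate built from channel-pooled maps (assumed design)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values    # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate


class TripletBranch(nn.Module):
    """One triplet-attention branch: permute, gate over the rotated plane, permute back."""
    def __init__(self, perm, kernel_size: int = 7):
        super().__init__()
        self.perm = perm
        self.inv = [perm.index(i) for i in range(4)]
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        y = x.permute(*self.perm)
        avg_map = y.mean(dim=1, keepdim=True)
        max_map = y.max(dim=1, keepdim=True).values
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return (y * gate).permute(*self.inv)


class CSTAttention(nn.Module):
    """Hypothetical CST block: average the channel, spatial, and triplet outputs."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()
        # The three branches cover the (H, W), (C, W), and (C, H) dimension pairs.
        self.triplet = nn.ModuleList([
            TripletBranch((0, 1, 2, 3)),  # H-W interaction
            TripletBranch((0, 2, 1, 3)),  # C-W interaction
            TripletBranch((0, 3, 2, 1)),  # C-H interaction
        ])

    def forward(self, x):
        t = sum(branch(x) for branch in self.triplet) / 3.0
        return (self.channel(x) + self.spatial(x) + t) / 3.0


if __name__ == "__main__":
    feats = torch.randn(2, 64, 56, 56)        # e.g., a ResNet-34 stage output
    print(CSTAttention(64)(feats).shape)      # torch.Size([2, 64, 56, 56])

In a full model, such a block would typically be inserted after each residual stage of the ResNet-34 backbone; the OD-ResNet-34 backbone described in the abstract additionally replaces standard convolutions with ODConv, which is not sketched here.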
Pages: 14