Improving Detection of DeepFakes through Facial Region Analysis in Images

Cited by: 1
Authors
Alanazi, Fatimah [1 ,2 ]
Ushaw, Gary [1 ]
Morgan, Graham [1 ]
Affiliations
[1] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE1 7RU, England
[2] Univ Hafr Al Batin, Coll Comp Sci & Engn, Hafar Al Batin 39524, Saudi Arabia
Keywords
DeepFake detection; face augmentation; face cutout; facial recognition; feature fusion; image analysis
DOI
10.3390/electronics13010126
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In the evolving landscape of digital media, media forensics, the critical examination and authentication of digital images, videos, and audio recordings, has become an area of paramount importance. This significance is driven largely by growing concern over the proliferation of DeepFakes: highly realistic, manipulated media, typically created with advanced artificial intelligence techniques. Such developments demand a deeper understanding of, and continued advances in, media forensics to preserve the integrity of digital media across domains. Current research is largely directed at a common challenge in DeepFake datasets: overfitting. Many proposed remedies rely on data augmentation, and a frequently adopted strategy is random erasure or cutout, in which sections of an image are removed at random to introduce diversity and mitigate overfitting. The resulting disparities between altered and unaltered images discourage the model from adapting too closely to individual samples, leading to better results. However, the stochastic nature of this approach may inadvertently obscure facial regions that carry information vital for DeepFake detection. Because there are no guidelines on which regions to cut out, most studies use a randomised approach; recent work has integrated face landmarks to designate specific facial areas for removal, although the selection remains somewhat random. There is therefore a need for a more comprehensive understanding of facial features and of which regions hold the most informative data for identifying DeepFakes. This study investigates the information conveyed by different facial components by excising distinct facial regions during model training. The goal is to provide insights that inform future face removal techniques for DeepFake datasets, fostering deeper understanding among researchers and advancing DeepFake detection. We present a novel method that uses face cutout techniques to improve understanding of the key facial features crucial to DeepFake detection. The method also combats overfitting in DeepFake datasets by generating diverse images with these techniques, thereby enhancing model robustness. The methodology is validated on publicly available datasets, FF++ and Celeb-DFv2. Both face cutout groups surpassed the baseline, indicating that cutouts improve DeepFake detection. Face Cutout Group 2 performed best, with 91% accuracy on Celeb-DF and 86% on the compound dataset, suggesting the significance of external facial features in detection. The study found that the eye region has the greatest impact on model performance and the nose region the least. Future research could explore the effect of the augmentation policy on video-based DeepFake detection.
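To make the landmark-guided cutout idea above concrete, the following minimal Python/NumPy sketch erases a named facial region (eyes, nose, or mouth) from a frame before it is passed to a detector. The region index ranges follow the standard 68-point landmark convention; the face_cutout function and the detect_68_landmarks helper are illustrative assumptions, not the authors' implementation.

import numpy as np

# Standard 68-point landmark index ranges (dlib-style convention); these
# groupings are an assumption for illustration and may differ from the
# paper's exact region definitions.
REGIONS = {
    "eyes":  range(36, 48),
    "nose":  range(27, 36),
    "mouth": range(48, 68),
}

def face_cutout(image: np.ndarray, landmarks: np.ndarray,
                region: str = "eyes", margin: int = 8) -> np.ndarray:
    """Erase (zero out) the bounding box of a named facial region.

    image     -- H x W x C uint8 array
    landmarks -- 68 x 2 array of (x, y) points from any 68-point detector
    region    -- key into REGIONS selecting the part to remove
    margin    -- extra pixels added around the region's bounding box
    """
    pts = landmarks[list(REGIONS[region])]
    x0, y0 = np.maximum(pts.min(axis=0) - margin, 0).astype(int)
    x1, y1 = (pts.max(axis=0) + margin).astype(int)
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # blacked-out region, as in cutout augmentation
    return out

# Usage (hypothetical helper name for landmark extraction):
# lm = detect_68_landmarks(img)
# augmented = face_cutout(img, lm, region="nose")

Applying such a cutout with a fixed region choice per experimental group, rather than erasing patches at random, mirrors the region-wise comparison described in the abstract.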
Pages: 22