Unmasking Deception: Empowering Deepfake Detection with Vision Transformer Network

Cited: 5
Authors
Arshed, Muhammad Asad [1 ,2 ]
Alwadain, Ayed [3 ]
Ali, Rao Faizan [2 ]
Mumtaz, Shahzad [4 ]
Ibrahim, Muhammad [1 ]
Muneer, Amgad [5 ,6 ]
Affiliations
[1] Islamia Univ Bahawalpur, Dept Comp Sci, Bahawalpur 63100, Pakistan
[2] Univ Management & Technol, Sch Syst & Technol, Lahore 54770, Pakistan
[3] King Saud Univ, Community Coll, Comp Sci Dept, Riyadh 145111, Saudi Arabia
[4] Islamia Univ Bahawalpur, Dept Data Sci, Bahawalpur 63100, Pakistan
[5] Univ Texas MD Anderson Canc Ctr, Dept Imaging Phys, Houston, TX 77030 USA
[6] Univ Teknol Petronas, Dept Comp & Informat Sci, Seri Iskandar 32160, Malaysia
Keywords
deepfake; identification; Vision Transformer; pretrained; fine-tuning
DOI
10.3390/math11173710
CLC Number
O1 [Mathematics]
Subject Classification Code
0701; 070101
Abstract
With the development of image-generation technologies, significant progress has been made in facial manipulation techniques. These techniques allow people to easily modify media such as videos and images by substituting the identity or facial expression of one person with the face of another, which has greatly increased the availability and accessibility of such tools and of the manipulated content termed 'deepfakes'. Accurate methods for detecting fake images must therefore be developed in a timely manner to prevent their misuse. This paper examines the capability of the Vision Transformer (ViT) to extract global features for the effective detection of deepfake images. In comprehensive experiments, our method demonstrates a high level of effectiveness, achieving accuracy, precision, recall, and F1 scores of 99.5% to 100% on both the original and the mixed data sets. To the best of our knowledge, this is the first study to incorporate a real-world application, specifically the examination of Snapchat-filtered images.
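The approach summarized in the abstract, fine-tuning a pretrained Vision Transformer for binary real-versus-fake image classification, can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' published pipeline: the checkpoint name (google/vit-base-patch16-224), the data/train directory layout, and all hyperparameters are assumptions.

    # Minimal sketch: fine-tune a pretrained ViT for real-vs-deepfake image
    # classification. Checkpoint, paths, and hyperparameters are assumed,
    # not taken from the paper.
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    from transformers import ViTForImageClassification

    # Standard ViT preprocessing: 224x224 input, normalization to [-1, 1].
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    # Hypothetical layout: data/train/real/*.jpg and data/train/fake/*.jpg;
    # ImageFolder assigns one class label per subdirectory.
    train_set = datasets.ImageFolder("data/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Load a pretrained ViT and attach a fresh two-class head.
    model = ViTForImageClassification.from_pretrained(
        "google/vit-base-patch16-224",  # assumed checkpoint
        num_labels=2,
        ignore_mismatched_sizes=True,   # replaces the 1000-class head
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for epoch in range(3):              # assumed epoch count
        for pixel_values, labels in train_loader:
            pixel_values = pixel_values.to(device)
            labels = labels.to(device)
            # Passing labels makes the model return a cross-entropy loss.
            outputs = model(pixel_values=pixel_values, labels=labels)
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

At inference time, torch.argmax(model(pixel_values=x).logits, dim=-1) yields the predicted real/fake label for a preprocessed batch x.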
Pages: 13
Related Papers
50 records in total
  • [1] Hybrid Transformer Network for Deepfake Detection
    Khan, Sohail Ahmed
    Dang-Nguyen, Duc-Tien
    [J]. 19TH INTERNATIONAL CONFERENCE ON CONTENT-BASED MULTIMEDIA INDEXING, CBMI 2022, 2022, : 8 - 14
  • [2] DeepFake detection algorithm based on improved vision transformer
    Heo, Young-Jin
    Yeo, Woon-Ha
    Kim, Byung-Gyu
    [J]. APPLIED INTELLIGENCE, 2023, 53 (07) : 7512 - 7527
  • [3] Efficient deepfake detection using shallow vision transformer
    Usmani, Shaheen
    Kumar, Sunil
    Sadhya, Debanjan
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (04) : 12339 - 12362
  • [4] Deepfake Image Detection using Vision Transformer Models
    Ghita, Bogdan
    Kuzminykh, Ievgeniia
    Usama, Abubakar
    Bakhshi, Taimur
    Marchang, Jims
    [J]. 2024 IEEE INTERNATIONAL BLACK SEA CONFERENCE ON COMMUNICATIONS AND NETWORKING, BLACKSEACOM 2024, 2024, : 332 - 335
  • [5] DeepFake detection with multi-scale convolution and vision transformer
    Lin, Hao
    Huang, Wenmin
    Luo, Weiqi
    Lu, Wei
    [J]. DIGITAL SIGNAL PROCESSING, 2023, 134
  • [6] Improved Deepfake Video Detection Using Convolutional Vision Transformer
    Deressa, Deressa Wodajo
    Lambert, Peter
    Van Wallendael, Glenn
    Atnafu, Solomon
    Mareen, Hannes
    [J]. 2024 IEEE GAMING, ENTERTAINMENT, AND MEDIA CONFERENCE, GEM 2024, 2024, : 492 - 497
  • [7] CLIPping the Deception: Adapting Vision-Language Models for Universal Deepfake Detection
    Khan, Sohail Ahmed
    Dang-Nguyen, Duc-Tien
    [J]. PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 1006 - 1015
  • [8] Cascaded Network Based on EfficientNet and Transformer for Deepfake Video Detection
    Deng, Liwei
    Wang, Jiandong
    Liu, Zhen
    [J]. NEURAL PROCESSING LETTERS, 2023, 55 : 7057 - 7076