ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes

Cited by: 0
Authors
Khalid, Fatima [1 ]
Javed, Ali [2 ]
Malik, Khalid Mahmood [3 ]
Irtaza, Aun [4 ]
Affiliations
[1] GIK Inst Engn Sci & Technol, Fac Comp Sci & Engn, Topi 23460, Pakistan
[2] Univ Engn & Technol Taxila, Dept Software Engn, Taxila 47050, Pakistan
[3] Univ Michigan, Coll Innovat & Technol, Flint, MI 48502 USA
[4] Univ Engn & Technol Taxila, Dept Comp Sci, Taxila 47050, Pakistan
Funding
U.S. National Science Foundation;
Keywords
Deepfakes; Prototypes; Feature extraction; Decision making; Training; Biometrics (access control); Computer architecture; Deepfakes detection; DFDC; ExplaNET; explainability; FaceForensics++; interpretability; prototype learning; xAI;
DOI
10.1109/TBIOM.2024.3407650
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional deepfake detection methods rely on deep learning models that lack transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. By employing prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the decision-making process of our model, offering insights into the key features crucial for deepfake detection. Subsequently, we utilize these prototypes to train a classification model that achieves both accuracy and interpretability in deepfake detection. We also employ the Grad-CAM technique to generate heatmaps that highlight the image regions contributing most significantly to the decision. Through experiments conducted on the FaceForensics++, Celeb-DF, and DFDC-P datasets, our method demonstrates superior performance compared to state-of-the-art deepfake detection techniques. Furthermore, the interpretability and explainability intrinsic to our method enhance its trustworthiness among forensic experts, owing to the transparency of our model.
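The prototype-based decision process the abstract describes can be sketched at a high level: patch embeddings from a backbone network are compared against learned prototype vectors, and the resulting similarity scores feed a linear classifier. The shapes, the distance-to-similarity mapping (borrowed from ProtoPNet-style prototype layers), and the real/fake label convention below are illustrative assumptions, not details of ExplaNET itself.

```python
import numpy as np

# Illustrative shapes: a 7x7 grid of 128-d patch embeddings for one image,
# and 10 learned prototypes (e.g., 5 per class: real / fake). All values
# are random placeholders standing in for a trained model's outputs.
rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 128))      # H x W x D patch embeddings
prototypes = rng.normal(size=(10, 128))  # P x D prototype vectors

# For each prototype, find the closest image patch (min squared L2 distance
# over spatial locations), then map distance to a similarity score that
# grows as the distance shrinks (ProtoPNet-style log-activation).
patches = feat.reshape(-1, 128)                                      # (H*W, D)
d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (H*W, P)
min_d2 = d2.min(axis=0)                                              # (P,)
sim = np.log((min_d2 + 1.0) / (min_d2 + 1e-4))                       # (P,)

# A linear layer over the similarity scores yields class logits;
# the per-prototype weights make the decision inspectable.
W = rng.normal(size=(2, 10))
logits = W @ sim
pred = int(np.argmax(logits))  # 0 = real, 1 = fake (convention assumed)
```

Because each logit is a weighted sum of named prototype similarities, the classification can be explained by pointing at the prototypes (and, via Grad-CAM, the image regions) that contributed most.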
Pages: 486-497
Page count: 12