ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes

Cited by: 0
Authors
Khalid, Fatima [1 ]
Javed, Ali [2 ]
Malik, Khalid Mahmood [3 ]
Irtaza, Aun [4 ]
Affiliations
[1] GIK Inst Engn Sci & Technol, Fac Comp Sci & Engn, Topi 23460, Pakistan
[2] Univ Engn & Technol Taxila, Dept Software Engn, Taxila 47050, Pakistan
[3] Univ Michigan, Coll Innovat & Technol, Flint, MI 48502 USA
[4] Univ Engn & Technol Taxila, Dept Comp Sci, Taxila 47050, Pakistan
Funding
U.S. National Science Foundation (NSF)
Keywords
Deepfakes; Prototypes; Feature extraction; Decision making; Training; Biometrics (access control); Computer architecture; Deepfakes detection; DFDC; ExplaNET; explainability; FaceForensics++; interpretability; prototype learning; xAI
DOI
10.1109/TBIOM.2024.3407650
Chinese Library Classification (CLC) Code
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional deepfake detection methods rely on deep learning models that lack transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce a novel method called ExplaNET, which utilizes interpretable and explainable prototypes to detect deepfakes. Through prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the model's decision-making process, offering insight into the key features crucial for deepfake detection. We subsequently use these prototypes to train a classification model that is both accurate and interpretable. We also employ the Grad-CAM technique to generate heatmaps that highlight the image regions contributing most strongly to each decision. In experiments on the FaceForensics++, Celeb-DF, and DFDC-P datasets, our method outperforms state-of-the-art deepfake detection techniques. Moreover, the interpretability and explainability intrinsic to our method, owing to the transparency of the model, enhance its trustworthiness among forensic experts.
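The abstract describes a prototype-based pipeline: learn a small set of prototype vectors summarizing real and fake appearance, score an input image by its similarity to each prototype, and classify from those similarity scores. Below is a minimal PyTorch sketch of that idea in the ProtoPNet style; it is an illustrative assumption, not the authors' released ExplaNET code, and every name in it (`PrototypeNet`, `n_prototypes`, the ResNet-18 backbone) is a hypothetical choice.

```python
# Minimal sketch of prototype-based classification in the ProtoPNet style.
# Illustrative only: ExplaNET's actual architecture, prototype count, and
# backbone are not given in the abstract, so all names here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class PrototypeNet(nn.Module):
    def __init__(self, n_prototypes=10, proto_dim=128, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep the convolutional trunk; drop average pooling and the FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.add_on = nn.Conv2d(512, proto_dim, kernel_size=1)
        # Learnable prototype vectors summarizing real/fake appearance.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, proto_dim))
        # Class logits are a linear combination of prototype similarities,
        # so each decision decomposes into "which prototypes fired".
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, x):
        fmap = F.relu(self.add_on(self.features(x)))            # (B, D, H, W)
        b, d, h, w = fmap.shape
        patches = fmap.permute(0, 2, 3, 1).reshape(b, h * w, d)
        protos = self.prototypes.unsqueeze(0).expand(b, -1, -1).contiguous()
        dists = torch.cdist(patches, protos) ** 2               # (B, HW, P)
        min_dists, _ = dists.min(dim=1)         # closest patch per prototype
        sims = torch.log((min_dists + 1) / (min_dists + 1e-4))  # similarity
        return self.classifier(sims), sims

model = PrototypeNet()
logits, sims = model(torch.randn(2, 3, 224, 224))
print(logits.shape, sims.shape)  # torch.Size([2, 2]) torch.Size([2, 10])
```

After training, each prototype can be visualized by the training patch nearest to it, which is what makes the similarity scores human-readable evidence rather than opaque activations.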
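The abstract also mentions Grad-CAM heatmaps over the decisive image regions. Here is a minimal sketch of Grad-CAM applied to the hypothetical model above, again as an assumption about the general technique rather than the paper's exact pipeline:

```python
# Minimal Grad-CAM sketch: weight the backbone's last feature map by the
# spatially averaged gradient of the target-class logit, then ReLU.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class=1):
    """Return one HxW heatmap per input for `target_class` (1 = 'fake')."""
    store = {}

    def hook(_module, _inputs, output):
        output.retain_grad()   # keep gradients on this non-leaf tensor
        store["fmap"] = output

    handle = model.features.register_forward_hook(hook)
    logits, _ = model(x)
    handle.remove()

    model.zero_grad()
    logits[:, target_class].sum().backward()

    fmap = store["fmap"]                                  # (B, C, H, W)
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)    # channel weights
    cam = F.relu((weights * fmap).sum(dim=1))             # (B, H, W)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    return cam.detach()  # upsample to input resolution before overlaying

heatmap = grad_cam(model, torch.randn(1, 3, 224, 224))
print(heatmap.shape)  # torch.Size([1, 7, 7]) for a 224x224 input
```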
Pages: 486-497
Page count: 12