ExplaNET: A Descriptive Framework for Detecting Deepfakes With Interpretable Prototypes

Cited: 0
Authors
Khalid, Fatima [1 ]
Javed, Ali [2 ]
Malik, Khalid Mahmood [3 ]
Irtaza, Aun [4 ]
Affiliations
[1] GIK Inst Engn Sci & Technol, Fac Comp Sci & Engn, Topi 23460, Pakistan
[2] Univ Engn & Technol Taxila, Dept Software Engn, Taxila 47050, Pakistan
[3] Univ Michigan, Coll Innovat & Technol, Flint, MI 48502 USA
[4] Univ Engn & Technol Taxila, Dept Comp Sci, Taxila 47050, Pakistan
Funding
U.S. National Science Foundation;
Keywords
Deepfakes; Prototypes; Feature extraction; Decision making; Training; Biometrics (access control); Computer architecture; Deepfakes detection; DFDC; ExplaNET; explainability; FaceForensics++; interpretability; prototype learning; xAI;
DOI
10.1109/TBIOM.2024.3407650
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of deepfake videos presents a significant challenge to the integrity of visual content, with potential implications for public opinion manipulation, deception of individuals or groups, and defamation, among other concerns. Traditional deepfake detection methods rely on deep learning models that lack transparency and interpretability. To instill confidence in AI-based deepfake detection among forensic experts, we introduce ExplaNET, a novel method that uses interpretable and explainable prototypes to detect deepfakes. Through prototype-based learning, we generate a collection of representative images that encapsulate the essential characteristics of both real and deepfake images. These prototypes are then used to explain the model's decision-making process, offering insight into the key features crucial for deepfake detection. We then use these prototypes to train a classification model that is both accurate and interpretable. We also employ Grad-CAM to generate heatmaps that highlight the image regions contributing most to each decision. In experiments on the FaceForensics++, Celeb-DF, and DFDC-P datasets, our method outperforms state-of-the-art deepfake detection techniques. Moreover, the interpretability and explainability intrinsic to our method enhance its trustworthiness among forensic experts, owing to the transparency of the model.
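The prototype-based pipeline described in the abstract can be made concrete with a small sketch. The PyTorch snippet below is an illustrative assumption, not the published ExplaNET architecture: it shows a ProtoPNet-style prototype layer that scores a CNN backbone's feature map against learned prototype vectors and feeds the resulting similarity scores to a real-vs-deepfake classifier. The class names, the prototype count, and the 512-channel backbone are all hypothetical choices.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeLayer(nn.Module):
        # Scores each spatial patch of a CNN feature map against P learned
        # prototype vectors and returns one similarity score per prototype.
        def __init__(self, num_prototypes=20, channels=512):
            super().__init__()
            # Prototypes live in feature space as 1x1 patches (assumption).
            self.prototypes = nn.Parameter(torch.rand(num_prototypes, channels, 1, 1))

        def forward(self, features):
            # features: (B, C, H, W). Squared L2 distance via the expansion
            # ||f - p||^2 = ||f||^2 - 2*f.p + ||p||^2, computed with a conv.
            f_sq = (features ** 2).sum(dim=1, keepdim=True)              # (B, 1, H, W)
            p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
            cross = F.conv2d(features, self.prototypes)                  # (B, P, H, W)
            dist = F.relu(f_sq - 2 * cross + p_sq)
            # Min-pool distances over space; a small distance means the image
            # contains a patch that closely matches that prototype.
            min_dist = -F.max_pool2d(-dist, kernel_size=dist.shape[2:]).flatten(1)
            return torch.log((min_dist + 1) / (min_dist + 1e-4))         # (B, P)

    class ProtoDeepfakeNet(nn.Module):
        # Backbone features -> prototype similarities -> real/fake logits.
        def __init__(self, backbone, num_prototypes=20, channels=512):
            super().__init__()
            self.backbone = backbone
            self.proto = PrototypeLayer(num_prototypes, channels)
            self.head = nn.Linear(num_prototypes, 2)

        def forward(self, x):
            return self.head(self.proto(self.backbone(x)))

With torchvision, for instance, backbone = nn.Sequential(*list(resnet18(weights=None).children())[:-2]) yields 512-channel feature maps compatible with this sketch. The interpretability comes from the structure itself: each logit is a weighted sum of similarities to prototypes that can be visualized as representative real or deepfake image patches.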
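The Grad-CAM heatmaps mentioned in the abstract can be reproduced, under the same assumptions, by weighting the backbone's final feature maps with the gradients of a chosen class logit. This continues the sketch above; the grad_cam function and its interface are hypothetical.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_class):
        # Capture the backbone's output feature map with a forward hook.
        feats = []
        handle = model.backbone.register_forward_hook(lambda m, i, o: feats.append(o))
        logits = model(x)
        handle.remove()
        fmap = feats[0]                                     # (B, C, H, W)
        # Gradient of the target logit w.r.t. the feature map.
        grads = torch.autograd.grad(logits[:, target_class].sum(), fmap)[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)      # per-channel importance
        cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
        # Upsample to input resolution for overlaying on the face image.
        return F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)

For example, grad_cam(model, face_batch, target_class=1) would highlight the facial regions driving the "deepfake" score, assuming label 1 denotes the deepfake class.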
Pages: 486-497
Page count: 12
Related Papers
50 records in total
  • [31] SLEEPER: interpretable Sleep staging via Prototypes from Expert Rules
    Al-Hussaini, Irfan
    Xiao, Cao
    Westover, M. Brandon
    Sun, Jimeng
    MACHINE LEARNING FOR HEALTHCARE CONFERENCE, VOL 106, 2019, 106
  • [32] Development of prototypes and descriptive terms of fruit complements for poultry products
    Parks, SS
    Lyon, BG
    Wicker, L
    JOURNAL OF FOOD QUALITY, 2000, 23 (02) : 123 - 136
  • [33] Combating deepfakes: a comprehensive multilayer deepfake video detection framework
    Rathoure, Nikhil
    Pateriya, R. K.
    Bharot, Nitesh
    Verma, Priyanka
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (38) : 85619 - 85636
  • [34] Learning efficient and interpretable prototypes from data for nearest neighbor classification method
    Ezghari, Soufiane
    Benouini, Rachid
    Zahi, Azeddine
    Zenkouar, Khalid
    2017 INTELLIGENT SYSTEMS AND COMPUTER VISION (ISCV), 2017,
  • [35] A Framework for Descriptive Epidemiology
    Lesko, Catherine R.
    Fox, Matthew P.
    Edwards, Jessie K.
    AMERICAN JOURNAL OF EPIDEMIOLOGY, 2022, 191 (12) : 2063 - 2070
  • [36] Detecting Audio Deepfakes: Integrating CNN and BiLSTM with Multi-Feature Concatenation
    Wani, Taiba Majid
    Qadri, Syed Asif Ahmad
    Comminiello, Danilo
    Amerini, Irene
    PROCEEDINGS OF THE 2024 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2024, 2024, : 271 - 276
  • [37] An Interpretable Framework for Stock Trend Forecasting
    Wang, Lewen
    Ye, Zuoxian
    2020 3RD INTERNATIONAL CONFERENCE ON COMPUTER INFORMATION SCIENCE AND APPLICATION TECHNOLOGY (CISAT), 2020, 1634
  • [38] A framework for inherently interpretable optimization models
    Goerigk, Marc
    Hartisch, Michael
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2023, 310 (03) : 1312 - 1324
  • [39] NeSyFOLD: A Framework for Interpretable Image Classification
    Padalkar, Parth
    Wang, Huaduo
    Gupta, Gopal
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4378 - 4387
  • [40] Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers
    Diel, Alexander
    Lalgi, Tania
    Schroeter, Isabel Carolin
    MacDorman, Karl F.
    Teufel, Martin
    Baeuerle, Alexander
    COMPUTERS IN HUMAN BEHAVIOR REPORTS, 2024, 16