Inter-model interpretability: Self-supervised models as a case study

Cited by: 0
Authors
Mustapha, Ahmad [1 ]
Khreich, Wael [2 ]
Masri, Wes [1 ]
Affiliations
[1] Amer Univ Beirut AUB, Elect & Comp Engn, POB 11-0236, Riad El Solh 11072020, Beirut, Lebanon
[2] Amer Univ Beirut AUB, Suliman S Olayan Sch Business OSB, POB 11-0236, Riad El Solh 11072020, Beirut, Lebanon
Keywords
Interpretability; Self-supervised; Deep learning;
DOI
10.1016/j.array.2024.100350
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Since early machine learning models, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric does not fully capture model similarities and differences, especially in the computer vision domain: a model with high accuracy on one dataset might yield lower accuracy on another, without further insight into why. To address this problem, we build on a recent interpretability technique called Dissect to introduce inter-model interpretability, which determines how models relate to or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We further cross this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allowed us to categorize the models into three categories and revealed, for the first time, the types of visual concepts different tasks require. This is a step forward for designing cross-task learning algorithms.
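The LCE idea described in the abstract can be sketched as follows. This is an illustrative assumption of how such a comparison might work, not the paper's actual implementation: each model is represented by a vector counting how many of its units Dissect matched to each visual concept, and model proximity is the cosine similarity between those vectors. All model names, concepts, and counts below are hypothetical.

```python
import math

# Toy concept vocabulary (a real Dissect run covers hundreds of concepts).
CONCEPTS = ["dog", "grass", "sky", "wheel", "brick"]

# Hypothetical concept-count vectors for three self-supervised models:
# entry i counts the units matched to CONCEPTS[i].
model_concepts = {
    "model_a": [12, 30, 25, 2, 1],   # scene/texture-heavy profile
    "model_b": [10, 28, 27, 3, 0],   # similar profile to model_a
    "model_c": [1, 2, 3, 40, 35],    # object/part-heavy profile
}

def cosine(u, v):
    """Cosine similarity between two concept-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Pairwise proximities: models that learned similar concepts land close
# together, which is the kind of structure an LCE space would expose.
sim_ab = cosine(model_concepts["model_a"], model_concepts["model_b"])
sim_ac = cosine(model_concepts["model_a"], model_concepts["model_c"])
```

Under this sketch, `sim_ab` is far higher than `sim_ac`, so models A and B would cluster together in the embedding space while model C sits apart.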
Pages: 12
Related Papers
50 records total
  • [1] Differentiable self-supervised clustering with intrinsic interpretability
    Yan, Xiaoqiang
    Jin, Zhixiang
    Mao, Yiqiao
    Ye, Yangdong
    Yu, Hui
    [J]. NEURAL NETWORKS, 2024, 179
  • [2] A self-supervised anomaly detection algorithm with interpretability
    Wu, Zhichao
    Yang, Xin
    Wei, Xiaopeng
    Yuan, Peijun
    Zhang, Yuanping
    Bai, Jianming
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 237
  • [3] Self-Supervised Learning for Seismic Data: Enhancing Model Interpretability With Seismic Attributes
    Salazar, Jose Julian
    Maldonado-Cruz, Eduardo
    Ochoa, Jesus
    Pyrcz, Michael J.
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [4] Towards Better Domain Adaptation for Self-Supervised Models: A Case Study of Child ASR
    Fan, Ruchao
    Zhu, Yunzheng
    Wang, Jinhan
    Alwan, Abeer
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1242 - 1252
  • [5] Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration
    Martin, Kinan
    Gauthier, Jon
    Breiss, Canaan
    Levy, Roger
    [J]. INTERSPEECH 2023, 2023, : 251 - 255
  • [6] Interpretability in Sentiment Analysis: A Self-Supervised Approach to Sentiment Cue Extraction
    Sun, Yawei
    He, Saike
    Han, Xu
    Luo, Yan
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (07):
  • [7] Self-Supervised Models are Continual Learners
    Fini, Enrico
    da Costa, Victor G. Turrisi
    Alameda-Pineda, Xavier
    Ricci, Elisa
    Alahari, Karteek
    Mairal, Julien
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9611 - 9620
  • [8] Dataset Inference for Self-Supervised Models
    Dziedzic, Adam
    Duan, Haonan
    Kaleem, Muhammad Ahmad
    Dhawan, Nikita
    Guan, Jonas
    Cattan, Yannis
    Boenisch, Franziska
    Papernot, Nicolas
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [9] SELF-SUPERVISED LEARNING-MODEL
    SAGA, K
    SUGASAKA, T
    SEKIGUCHI, M
    [J]. FUJITSU SCIENTIFIC & TECHNICAL JOURNAL, 1993, 29 (03): : 209 - 216
  • [10] ON COMPRESSING SEQUENCES FOR SELF-SUPERVISED SPEECH MODELS
    Meng, Yen
    Chen, Hsuan-Jui
    Shi, Jiatong
    Watanabe, Shinji
    Garcia, Paola
    Lee, Hung-yi
    Tang, Hao
    [J]. 2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 1128 - 1135