Interpretable Deep Convolutional Neural Networks via Meta-learning

Cited by: 0
Authors
Liu, Xuan [1 ]
Wang, Xiaoguang [1 ,2 ]
Matwin, Stan [1 ,3 ]
Affiliations
[1] Dalhousie Univ, Fac Comp Sci, Inst Big Data Analyt, Halifax, NS, Canada
[2] Alibaba Grp, Hangzhou, Peoples R China
[3] Polish Acad Sci, Inst Comp Sci, Warsaw, Poland
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
interpretability; meta-learning; deep learning; convolutional neural network; TensorFlow; big data
DOI
N/A
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Model interpretability is a requirement in many applications in which crucial decisions are made by users relying on a model's outputs. The recent movement for "algorithmic fairness" also stipulates explainability, and therefore interpretability, of learning models. Yet the most successful contemporary machine learning approaches, deep neural networks, produce models that are highly non-interpretable. We address this challenge by proposing a technique called CNN-INTE to interpret deep Convolutional Neural Networks (CNNs) via meta-learning. In this work, we interpret a specific hidden layer of a deep CNN model on the MNIST image dataset. We use a clustering algorithm in a two-level structure to find the meta-level training data and Random Forests as the base learning algorithm to generate the meta-level test data. The interpretation results are displayed visually via diagrams, which clearly indicate how a specific test instance is classified. Our method achieves global interpretation for all test instances on the hidden layers without sacrificing the accuracy obtained by the original deep CNN model. This means our model is faithful to the original deep CNN, which leads to reliable interpretations.
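The pipeline the abstract describes can be sketched roughly as follows. This is an illustrative approximation, not the authors' CNN-INTE implementation: it assumes scikit-learn, substitutes the 8x8 digits dataset's raw pixels for a CNN hidden-layer activation on MNIST, and collapses the paper's two-level clustering into a single K-means pass.

```python
# Illustrative sketch of the CNN-INTE idea: interpret a hidden-layer
# representation by clustering it, then training interpretable Random Forest
# base learners on top. Simplified; not the authors' implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for hidden-layer activations of a deep CNN on MNIST: here we use
# the 8x8 digits dataset's raw pixels as a proxy "hidden representation".
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cluster the hidden representation to form meta-level groups
# (the paper uses a two-level clustering structure; one level shown here).
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_train)
clusters = km.labels_

# Train a Random Forest base learner per cluster; forest structure and
# feature importances give a readable view of each group's decisions.
forests = {}
for c in np.unique(clusters):
    mask = clusters == c
    forests[c] = RandomForestClassifier(
        n_estimators=50, random_state=0).fit(X_train[mask], y_train[mask])

# To classify (and explain) a test instance, route it to its nearest
# cluster's forest and inspect that forest's decision paths.
c_test = km.predict(X_test[:1])[0]
pred = forests[c_test].predict(X_test[:1])
```

The per-cluster forests here only gesture at the paper's meta-learning setup; the actual method generates meta-level training and test data from the base learners' outputs and visualizes how each test instance is classified.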
Pages: 9