Exploring Internal Representations of Deep Neural Networks

Cited by: 0
|
Authors
Despraz, Jeremie [1 ,2 ]
Gomez, Stephane [1 ,2 ]
Satizabal, Hector F. [1 ]
Pena-Reyes, Carlos Andres [1 ,2 ]
Affiliations
[1] Univ Appl Sci Western Switzerland HES So, Sch Business & Engn Vaud HEIG VD, Yverdon, Switzerland
[2] SIB Swiss Inst Bioinformat, Computat Intelligence Computat Biol CI4CB, Lausanne, Switzerland
Keywords
Deep-learning; Convolutional neural networks; Autoencoders; Generative neural networks; Activation maximization; Interpretability;
DOI
10.1007/978-3-030-16469-0_7
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a method for generating images that activate any target neuron or group of neurons in a trained convolutional neural network (CNN). The images are constructed so that they retain attributes of natural images, such as color patterns and textures. The main idea of the method is to pre-train a deep generative network on a dataset of natural images and then use this network to generate images for the target CNN. Analyzing the generated images allows for a better understanding of the CNN's internal representations, the detection of otherwise unseen biases, and the creation of explanations through feature localization and description.
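The core idea described in the abstract — optimizing the latent code of a pretrained generator so that the generated image maximizes a chosen neuron's activation — can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: the tiny tanh "generator" and linear "CNN" read-out below are hypothetical stand-ins with random frozen weights, and the gradient of the activation with respect to the latent code is computed by hand via the chain rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's components: a fixed ("pretrained")
# generator mapping a 16-d latent code to a flattened 8x8x3 image, and a
# linear read-out playing the role of the target CNN neuron.
W_gen = rng.normal(size=(192, 16)) * 0.1   # generator weights (frozen)
w_cnn = rng.normal(size=192)               # weights feeding the target neuron

def generate(z):
    """Generator: latent code -> image (flattened)."""
    return np.tanh(W_gen @ z)

def activation(z):
    """Target neuron's activation for the image generated from z."""
    return w_cnn @ generate(z)

def grad_z(z):
    """d(activation)/dz via the chain rule through tanh."""
    pre = W_gen @ z
    return W_gen.T @ (w_cnn * (1.0 - np.tanh(pre) ** 2))

z = rng.normal(size=16)                    # latent code to optimize
before = activation(z)
for _ in range(200):
    z += 0.05 * grad_z(z)                  # gradient ASCENT on the activation
after = activation(z)
print(f"target activation: {before:.3f} -> {after:.3f}")
```

Because the search happens in the generator's latent space rather than directly in pixel space, the optimized image stays on (or near) the manifold of images the generator was trained on — which is what gives the resulting visualizations their natural color patterns and textures.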
Pages: 119-138
Page count: 20