DNNViz: Training Evolution Visualization for Deep Neural Networks

Cited by: 1
Authors
Clavien, Gil [1 ]
Alberti, Michele [1 ]
Pondenkandath, Vinaychandran [1 ]
Ingold, Rolf [1 ]
Liwicki, Marcus [1 ,2 ]
Affiliations
[1] Univ Fribourg, Document Image & Voice Anal Grp DIVA, Fribourg, Switzerland
[2] Lulea Univ Technol, Machine Learning Grp, Lulea, Sweden
Funding
Swiss National Science Foundation
Keywords
DOI
10.1109/SDS.2019.00-13
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present novel visualization strategies for inspecting, displaying, browsing, and comparing deep neural networks (DNNs) and their internal state during training. Despite their broad use across many fields of application, deep learning techniques are still often referred to as "black boxes". Gaining a better understanding of these models and how they work is a thriving field of research. To this end, we contribute a visualization mechanism designed explicitly to enable simple and efficient introspection of deep neural networks. The mechanism processes, computes, and displays neuron activations during the training of a deep neural network. We furthermore demonstrate the usefulness of this visualization technique through different use cases: class similarity detection, hints for network pruning, and adversarial attack detection. We implemented this mechanism in an open-source tool called DNNViz, which is integrated into DeepDIVA, a highly functional PyTorch framework for reproducible experiments.
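The core mechanism the abstract describes, capturing per-neuron activations at successive points in training so they can be displayed and compared, can be illustrated with a minimal stdlib-only sketch. This is not the actual DNNViz/DeepDIVA code; the toy network, the fixed probe input, and the `snapshots` store are illustrative assumptions standing in for a real PyTorch training loop with forward hooks:

```python
import math
import random

random.seed(0)

def forward(weights, x):
    """One dense layer with sigmoid; returns per-neuron activations."""
    return [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
            for row in weights]

# Hypothetical snapshot store: epoch -> list of per-neuron activations.
snapshots = {}

# Toy "training": random weight perturbations stand in for gradient steps.
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
probe = [0.5, -0.2, 0.8]  # fixed input used to probe the network each epoch

for epoch in range(3):
    snapshots[epoch] = forward(weights, probe)
    # (a real trainer would update weights from gradients here)
    weights = [[w + random.uniform(-0.1, 0.1) for w in row] for row in weights]

# A text rendering akin to an activation heat map: one row per epoch,
# one column per neuron, so drift over training is visible at a glance.
for epoch, acts in snapshots.items():
    print(f"epoch {epoch}: " + " ".join(f"{a:.2f}" for a in acts))
```

Comparing rows across epochs is what makes use cases like class similarity detection or pruning hints possible: neurons whose activations stay flat or mirror each other across training are candidates for closer inspection.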
Pages: 19-24
Page count: 6