Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

Cited: 0
Authors
Nguyen, Anh
Dosovitskiy, Alexey
Yosinski, Jason
Brox, Thomas
Clune, Jeff
Institutions
Funding
U.S. National Science Foundation; European Research Council;
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right (similar to why we study the human brain) and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
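The core idea the abstract describes, activation maximization, is gradient ascent on an input to increase a neuron's activation. A minimal sketch of that loop is below, using a toy differentiable "neuron" with a known analytic gradient rather than a real DNN, and optimizing the input directly rather than a DGN latent code as the paper's method does; the `neuron_activation` function and its preferred pattern are illustrative assumptions, not from the paper.

```python
import numpy as np

def neuron_activation(x, target):
    # Toy neuron: activation peaks when the input x matches its
    # preferred pattern `target` (stands in for a trained unit).
    return -np.sum((x - target) ** 2)

def activation_maximization(target, steps=200, lr=0.1, seed=0):
    # Gradient ascent from a random starting "image" to find an
    # input that highly activates the neuron.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)
    for _ in range(steps):
        grad = -2.0 * (x - target)  # analytic gradient of the activation
        x += lr * grad              # ascend the activation landscape
    return x

target = np.array([1.0, -2.0, 0.5])   # hypothetical preferred input
x_opt = activation_maximization(target)
print(np.allclose(x_opt, target, atol=1e-3))
```

In the paper's setting the gradient would come from backpropagation through the DNN, and the optimization variable would be the latent code of the deep generator network, which acts as a learned natural-image prior.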
Pages: 9
Related Papers
50 records
  • [31] ON A UNIFIED SYNTHESIZING APPROACH FOR CELLULAR NEURAL NETWORKS
    HO, CY
    MORI, S
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 1994, E77D (04) : 433 - 442
  • [32] Synthesizing Chest X-Ray Pathology for Training Deep Convolutional Neural Networks
    Salehinejad, Hojjat
    Colak, Errol
    Dowdell, Tim
    Barfett, Joseph
    Valaee, Shahrokh
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2019, 38 (05) : 1197 - 1206
  • [33] Squeezing Correlated Neurons for Resource-Efficient Deep Neural Networks
    Ozen, Elbruz
    Orailoglu, Alex
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 52 - 68
  • [34] Implementation of deep neural networks to count dopamine neurons in substantia nigra
    Penttinen, Anna-Maija
    Parkkinen, Ilmari
    Blom, Sami
    Kopra, Jaakko
    Andressoo, Jaan-Olle
    Pitkanen, Kari
    Voutilainen, Merja H.
    Saarma, Mart
    Airavaara, Mikko
    [J]. EUROPEAN JOURNAL OF NEUROSCIENCE, 2018, 48 (06) : 2354 - 2361
  • [35] Maxout neurons for deep convolutional and LSTM neural networks in speech recognition
    Cai, Meng
    Liu, Jia
    [J]. SPEECH COMMUNICATION, 2016, 77 : 53 - 64
  • [36] The Effect of Combinatorial Coverage for Neurons on Fault Detection in Deep Neural Networks
    Wang, Ziyuan
    Guo, Jinwu
    Chen, Yanshan
    She, Feiyan
    [J]. 2021 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C 2021), 2021, : 77 - 82
  • [37] A Novel Method to Fix Numbers of Hidden Neurons in Deep Neural Networks
    Li, Jiqian
    Wu, Yan
    Zhang, Junming
    Zhao, Guodong
    [J]. 2015 8TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 2, 2015, : 523 - 526
  • [38] BagNet: Berkeley Analog Generator with Layout Optimizer Boosted with Deep Neural Networks
    Hakhamaneshi, Kourosh
    Werblun, Nick
    Abbeel, Pieter
    Stojanovic, Vladimir
    [J]. 2019 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD), 2019,
  • [39] Scheduling Inputs in Early Exit Neural Networks
    Casale, Giuliano
    Roveri, Manuel
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (02) : 451 - 465
  • [40] DeepAbstraction: 2-Level Prioritization for Unlabeled Test Inputs in Deep Neural Networks
    Al-Qadasi, Hamzah
    Wu, Changshun
Falcone, Ylies
    Bensalem, Saddek
    [J]. 2022 FOURTH IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING (AITEST 2022), 2022, : 64 - 71