Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

Cited by: 0
Authors
Nguyen, Anh
Dosovitskiy, Alexey
Yosinski, Jason
Brox, Thomas
Clune, Jeff
Institutions
Funding
U.S. National Science Foundation; European Research Council;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right (similar to why we study the human brain) and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
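The abstract describes activation maximization with a deep generator network as a learned prior: instead of optimizing image pixels directly, a latent code is optimized so that the generator's output strongly activates a chosen neuron in the network being studied. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' released implementation; the `generator`, `classifier`, `target_unit`, and regularization weight are illustrative assumptions.

```python
# Minimal sketch of activation maximization through a generator prior.
# Assumes `generator` maps a latent code to an image and `classifier`
# returns a (1, num_units) tensor of activations; both are hypothetical.
import torch


def synthesize_preferred_input(generator, classifier, target_unit,
                               code_dim=4096, steps=200, lr=0.05,
                               l2_weight=1e-3, device="cpu"):
    """Optimize a latent code z so generator(z) highly activates target_unit."""
    z = torch.zeros(1, code_dim, device=device, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = generator(z)                          # DGN prior: code -> image
        activation = classifier(image)[0, target_unit]
        # Maximize the target activation; a small L2 penalty keeps the code
        # in a region where the generator tends to produce realistic images.
        loss = -activation + l2_weight * z.pow(2).sum()
        loss.backward()
        opt.step()
    return generator(z).detach()
```

In this sketch the generator is never updated; only the latent code is optimized, which is what lets the learned prior keep the synthesized images looking natural.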
Pages: 9
Related Papers
50 records in total
  • [1] Prioritizing Test Inputs for Deep Neural Networks via Mutation Analysis
    Wang, Zan
    You, Hanmo
    Chen, Junjie
    Zhang, Yingyi
    Dong, Xuyuan
    Zhang, Wenbin
    [J]. 2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2021), 2021, : 397 - 409
  • [2] Synthesizing Game Audio Using Deep Neural Networks
    McDonagh, Aoife
    Lemley, Joseph
    Cassidy, Ryan
    Corcoran, Peter
    [J]. 2018 IEEE GAMES, ENTERTAINMENT, MEDIA CONFERENCE (GEM), 2018, : 312 - 315
  • [3] On Detection of Out of Distribution Inputs in Deep Neural Networks
    Jha, Susmit
    Roy, Anirban
    [J]. 2021 IEEE THIRD INTERNATIONAL CONFERENCE ON COGNITIVE MACHINE INTELLIGENCE (COGMI 2021), 2021, : 282 - 288
  • [4] Efficient generation of valid test inputs for deep neural networks via gradient search
    Jiang, Zhouxian
    Li, Honghui
    Wang, Rui
    [J]. JOURNAL OF SOFTWARE-EVOLUTION AND PROCESS, 2024, 36 (04)
  • [5] Synthesizing and Imitating Handwriting using Deep Recurrent Neural Networks and Mixture Density Networks
    Kumar, K. Manoj
    Kandala, Harish
    Reddy, N. Sudhakar
    [J]. 2018 9TH INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATION AND NETWORKING TECHNOLOGIES (ICCCNT), 2018,
  • [6] Neural Networks with Dependent Inputs
    Mostafa Boskabadi
    Mahdi Doostparast
    [J]. Neural Processing Letters, 2023, 55 : 7337 - 7350
  • [7] Selective Hardening of Critical Neurons in Deep Neural Networks
    Ruospo, Annachiara
    Gavarini, Gabriele
    Bragaglia, Ilaria
    Traiola, Marcello
    Bosio, Alberto
    Sanchez, Ernesto
    [J]. 2022 25TH INTERNATIONAL SYMPOSIUM ON DESIGN AND DIAGNOSTICS OF ELECTRONIC CIRCUITS AND SYSTEMS (DDECS), 2022, : 136 - 141
  • [8] Neural Networks with Dependent Inputs
    Boskabadi, Mostafa
    Doostparast, Mahdi
    [J]. NEURAL PROCESSING LETTERS, 2023, 55 (06) : 7337 - 7350
  • [9] Single cortical neurons as deep artificial neural networks
    Beniaguev, David
    Segev, Idan
    London, Michael
    [J]. NEURON, 2021, 109 (17) : 2727 - +
  • [10] Load Forecasting via Deep Neural Networks
    He, Wan
    [J]. 5TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND QUANTITATIVE MANAGEMENT, ITQM 2017, 2017, 122 : 308 - 314