Sparse-Coding Variational Autoencoders

Times Cited: 0
Authors
Geadah, Victor [1 ]
Barello, Gabriel [2 ]
Greenidge, Daniel [3 ]
Charles, Adam S. [4 ,5 ]
Pillow, Jonathan W. [6 ]
Affiliations
[1] Princeton Univ, Program Appl & Computat Math, Princeton, NJ 08544 USA
[2] Univ Oregon, Inst Neurosci, Eugene, OR 97403 USA
[3] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[4] Johns Hopkins Univ, Dept Biomed Engn, Ctr Imaging Sci, Baltimore, MD 21218 USA
[5] Johns Hopkins Univ, Kavli Neurosci Discovery Inst, Baltimore, MD 21218 USA
[6] Princeton Univ, Princeton Neurosci Inst, Princeton, NJ 08544 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
NATURAL IMAGE STATISTICS; LEARNING SPARSE; THRESHOLDING ALGORITHM; RECEPTIVE-FIELDS; ARCHITECTURE; VARIABILITY; EMERGENCE; INFERENCE; NETWORKS; CODES;
DOI
10.1162/neco_a_01715
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. The original sparse coding model suffered from two key limitations, however: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics, and (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational autoencoder (VAE) framework. We introduce the sparse coding variational autoencoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parameterized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation for the mapping from image patches to neural activities and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
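To make the generative and inference structure described in the abstract concrete, below is a minimal sketch of an SVAE-style model in PyTorch. This is an illustration under stated assumptions, not the authors' implementation: the names (SVAE, neg_elbo), the layer sizes (n_pix, n_hidden), the overcomplete latent dimension n_latent, the fixed unit-scale Laplace prior, and the single-sample Monte Carlo estimate are all illustrative choices. The objective follows the standard ELBO, E_{q(z|x)}[log p(x|z) + log p(z)] + H[q(z|x)], with a gaussian recognition posterior, a sparse Laplace prior, and a linear decoder.

```python
import math
import torch
import torch.nn as nn

LOG2PI = math.log(2 * math.pi)

class SVAE(nn.Module):
    """Sketch of a sparse-coding VAE: deep recognition model, linear decoder."""
    def __init__(self, n_pix=144, n_latent=288, n_hidden=256):
        super().__init__()
        # Recognition model: feedforward net mapping an image patch to the
        # mean and log-variance of a gaussian posterior over latent activities.
        self.encoder = nn.Sequential(
            nn.Linear(n_pix, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(n_hidden, n_latent)
        self.to_logvar = nn.Linear(n_hidden, n_latent)
        # Generative model: linear projection through an overcomplete
        # dictionary (n_latent > n_pix), as in classical sparse coding.
        self.decoder = nn.Linear(n_latent, n_pix, bias=False)
        self.log_noise_var = nn.Parameter(torch.zeros(1))  # log obs. noise variance

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: differentiable sample z ~ q(z|x).
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(z), mu, logvar, z

def neg_elbo(model, x, recon, mu, logvar, z):
    """Single-sample Monte Carlo estimate of the negative ELBO."""
    # Gaussian reconstruction log-likelihood log p(x|z).
    noise_var = model.log_noise_var.exp()
    log_lik = -0.5 * ((x - recon) ** 2 / noise_var
                      + model.log_noise_var + LOG2PI).sum(-1)
    # Sparse Laplace prior: log p(z) = sum_i [-|z_i| - log 2] (unit scale).
    log_prior = -(z.abs() + math.log(2.0)).sum(-1)
    # Analytic entropy of the gaussian posterior q(z|x).
    entropy = 0.5 * (logvar + 1.0 + LOG2PI).sum(-1)
    # ELBO = E_q[log p(x|z) + log p(z)] + H[q].
    return -(log_lik + log_prior + entropy).mean()

# Usage: one gradient step on a batch of (hypothetical) 12x12 whitened patches.
x = torch.randn(32, 144)
model = SVAE()
loss = neg_elbo(model, x, *model(x))
loss.backward()
```

The two departures from a standard VAE are visible directly in the code: the decoder is a single bias-free linear layer playing the role of the sparse-coding dictionary, and the prior term in the ELBO is a Laplace (heavy-tailed) log-density rather than a gaussian one.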
Pages: 2571-2601
Number of pages: 31