Sparse-Coding Variational Autoencoders

Cited by: 0
Authors
Geadah, Victor [1 ]
Barello, Gabriel [2 ]
Greenidge, Daniel [3 ]
Charles, Adam S. [4 ,5 ]
Pillow, Jonathan W. [6 ]
Affiliations
[1] Princeton Univ, Program Appl & Computat Math, Princeton, NJ 08544 USA
[2] Univ Oregon, Inst Neurosci, Eugene, OR 97403 USA
[3] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[4] Johns Hopkins Univ, Ctr Imaging Sci, Dept Biomed Engn, Baltimore, MD 21218 USA
[5] Johns Hopkins Univ, Kavli Neurosci Discovery Inst, Baltimore, MD 21218 USA
[6] Princeton Univ, Princeton Neurosci Inst, Princeton, NJ 08544 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
NATURAL IMAGE STATISTICS; LEARNING SPARSE; THRESHOLDING ALGORITHM; RECEPTIVE-FIELDS; ARCHITECTURE; VARIABILITY; EMERGENCE; INFERENCE; NETWORKS; CODES;
DOI
10.1162/neco_a_01715
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. The original sparse coding model suffered from two key limitations, however: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics, and (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational autoencoder (VAE) framework. We introduce the sparse coding variational autoencoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parameterized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation for the mapping from image patches to neural activities and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
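The abstract describes the architecture concretely enough to sketch in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a Gaussian recognition posterior, a Laplace distribution as the heavy-tailed sparse prior, 12x12 patches (144 pixels) with a 2x overcomplete latent code, and a single-sample Monte Carlo estimate of the ELBO, E_q[log p(x|z) + log p(z) - log q(z|x)] (the KL term has no closed form under a Laplace prior). All layer sizes and names (SVAE, n_pixels, n_latents) are hypothetical.

```python
# Minimal SVAE sketch per the abstract: deep recognition network, linear
# decoder (the sparse-coding dictionary), heavy-tailed prior, ELBO objective.
import torch
import torch.nn as nn

class SVAE(nn.Module):
    def __init__(self, n_pixels=144, n_latents=288):  # overcomplete: latents > pixels
        super().__init__()
        # Recognition model: deep feedforward net -> posterior mean and log-variance
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.to_mu = nn.Linear(512, n_latents)
        self.to_logvar = nn.Linear(512, n_latents)
        # Generative model: a linear projection rather than a deep decoder
        self.dictionary = nn.Linear(n_latents, n_pixels, bias=False)
        self.log_noise = nn.Parameter(torch.zeros(1))  # observation noise scale

    def elbo(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterized sample from the Gaussian recognition posterior q(z|x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = self.dictionary(z)
        # Single-sample Monte Carlo ELBO: log p(x|z) + log p(z) - log q(z|x),
        # with a Gaussian likelihood and a sparse Laplace prior (an assumption;
        # the paper compares several heavy-tailed priors)
        log_lik = torch.distributions.Normal(x_hat, self.log_noise.exp()).log_prob(x).sum(-1)
        log_prior = torch.distributions.Laplace(0.0, 1.0).log_prob(z).sum(-1)
        log_q = torch.distributions.Normal(mu, torch.exp(0.5 * logvar)).log_prob(z).sum(-1)
        return (log_lik + log_prior - log_q).mean()

# Usage sketch: fit by gradient ascent on the ELBO over batches of patches.
# model = SVAE()
# loss = -model.elbo(batch_of_patches)  # batch_of_patches: (batch, 144)
# loss.backward()
```

Maximizing this objective fits the dictionary and recognition network jointly; the linear decoder keeps the generative model identical to classical sparse coding while the feedforward encoder replaces the recurrent inference dynamics.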
Pages: 2571-2601
Page count: 31
Related Papers
50 records in total
  • [31] Comparing variance distribution in orthogonal and sparse-coding models of simple cell receptive fields in mammalian visual cortex
    Watters, PA
    Tolhurst, DJ
    JOURNAL OF PHYSIOLOGY-LONDON, 1998, 506P : 91P - 91P
  • [32] Diffusion Variational Autoencoders
    Rey, Luis A. Perez
    Menkovski, Vlado
    Portegies, Jim
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2704 - 2710
  • [33] Tree Variational Autoencoders
    Manduchi, Laura
    Vandenhirtz, Moritz
    Ryser, Alain
    Vogt, Julia E.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [34] Overdispersed Variational Autoencoders
    Shah, Harshil
    Barber, David
    Botev, Aleksandar
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1109 - 1116
  • [35] Ladder Variational Autoencoders
    Sonderby, Casper Kaae
    Raiko, Tapani
    Maaloe, Lars
    Sonderby, Soren Kaae
    Winther, Ole
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [36] Affine Variational Autoencoders
    Bidart, Rene
    Wong, Alexander
    IMAGE ANALYSIS AND RECOGNITION, ICIAR 2019, PT I, 2019, 11662 : 461 - 472
  • [37] Clockwork Variational Autoencoders
    Saxena, Vaibhav
    Ba, Jimmy
    Hafner, Danijar
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [38] Empirical Transition Probability Indexing Sparse-Coding Belief Propagation (ETPI-SCoBeP) Genome Sequence Alignment
    Roozgard, Aminmohammad
    Barzigar, Nafise
    Wang, Shuang
    Jiang, Xiaoqian
    Cheng, Samuel
    CANCER INFORMATICS, 2014, 13 : 159 - 165
  • [39] Lifelong Mixture of Variational Autoencoders
    Ye, Fei
    Bors, Adrian G.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 461 - 474
  • [40] Quality metrics of variational autoencoders
    Leontev, Mikhail
    Mikheev, Alexander
    Sviatov, Kirill
    Sukhov, Sergey
    2020 VI INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND NANOTECHNOLOGY (IEEE ITNT-2020), 2020,