Learning generative models of natural images

Cited by: 10
Authors
Wu, JM [1]
Lin, ZH [1]
Affiliation
[1] Natl Donghwa Univ, Dept Appl Math, Hualien, Taiwan
Keywords
neural networks; cortical maps; elastic net; Potts model; self-organization; unsupervised learning; natural images;
DOI
10.1016/S0893-6080(02)00018-7
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This work proposes an unsupervised learning process for the analysis of natural images. The derivation is based on a generative model: a stochastic coin-flip process operating directly on many disjoint multivariate Gaussian distributions. Following the maximum likelihood principle and using the Potts encoding, the goodness-of-fit of the generative model to a large number of patches randomly sampled from natural images is expressed quantitatively by an objective function subject to a set of constraints. By further combining the objective function with the minimal wiring criterion, we obtain a mixed integer and linear programming problem. A hybrid of mean field annealing and gradient descent is applied to this mathematical framework and produces three sets of interactive dynamics for the learning process. Numerical simulations show that the learning process is effective for extracting orientation, localization and bandpass features, and that the generative model yields a sparse code ensemble for natural images. (C) 2002 Elsevier Science Ltd. All rights reserved.
Pages: 337-347
Number of pages: 11
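
The abstract describes a mixture-of-Gaussians generative model for image patches, fitted by maximum likelihood with mean-field-annealing dynamics. The following is a minimal, illustrative sketch of that general idea in Python/NumPy, not the authors' algorithm: the function name, the linear annealing schedule, and the shared isotropic variance are simplifying assumptions, and the Potts encoding and minimal wiring constraint from the paper are omitted.

# Illustrative sketch only: temperature-annealed soft-assignment fitting of a
# Gaussian mixture to image patches. All names and parameter values are
# assumptions for illustration, not the method published in the paper.

import numpy as np

def fit_gaussian_mixture_annealed(patches, n_components=8, n_iters=50,
                                  t_start=5.0, t_end=0.5, seed=0):
    """Fit an isotropic Gaussian mixture to patches (N, D) with
    temperature-annealed soft assignments (mean-field flavour of EM)."""
    rng = np.random.default_rng(seed)
    n, d = patches.shape
    means = patches[rng.choice(n, n_components, replace=False)].copy()
    var = np.var(patches)                       # shared isotropic variance
    weights = np.full(n_components, 1.0 / n_components)

    for it in range(n_iters):
        # linear annealing: high temperature (soft) -> low temperature (hard)
        temp = t_start + (t_end - t_start) * it / max(n_iters - 1, 1)

        # E-step: soft (mean-field) assignments at the current temperature
        sq_dist = ((patches[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        logits = (np.log(weights + 1e-12) - 0.5 * sq_dist / var) / temp
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update means, mixing weights, and the shared variance
        nk = resp.sum(axis=0) + 1e-12
        means = (resp.T @ patches) / nk[:, None]
        weights = nk / n
        new_sq = ((patches[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        var = (resp * new_sq).sum() / (n * d) + 1e-12

    return means, weights, var

if __name__ == "__main__":
    # toy "patches": random 8x8 blocks flattened to 64-dim vectors
    rng = np.random.default_rng(1)
    patches = rng.normal(size=(500, 64))
    means, weights, var = fit_gaussian_mixture_annealed(patches)
    print(means.shape, weights.round(3), round(var, 3))

On natural-image patches (rather than the random toy data above), the learned means would play the role of the orientation-selective, localized, bandpass features reported in the paper, and the soft assignments give a sparse code per patch as the temperature is lowered.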