Learning Generative Models for Active Inference Using Tensor Networks

Cited by: 1
Authors
Wauthier, Samuel T. [1 ]
Vanhecke, Bram [2 ,3 ]
Verbelen, Tim [1 ]
Dhoedt, Bart [1 ]
Affiliations
[1] Ghent Univ IMEC, IDLab, Dept Informat Technol, Technol Pk Zwijnaarde 126, B-9052 Ghent, Belgium
[2] Univ Vienna, Fac Phys, Boltzmanngasse 5, A-1090 Vienna, Austria
[3] Univ Vienna, Fac Math Quantum Opt Quantum Nanophys & Quantum I, Boltzmanngasse 5, A-1090 Vienna, Austria
Source
ACTIVE INFERENCE, IWAI 2022 | 2023 / Volume 1721
Keywords
Active inference; Tensor networks; Generative modeling;
DOI
10.1007/978-3-031-28719-0_20
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Active inference provides a general framework for behavior and learning in autonomous agents. It states that an agent will attempt to minimize its variational free energy, defined in terms of beliefs over observations, internal states and policies. Traditionally, every aspect of a discrete active inference model must be specified by hand, i.e., by manually defining the hidden state space structure as well as the required distributions, such as likelihood and transition probabilities. Recently, efforts have been made to learn state space representations automatically from observations using deep neural networks. In this paper, we present a novel approach to learning state spaces using quantum physics-inspired tensor networks. The ability of tensor networks to represent the probabilistic nature of quantum states, as well as to reduce large state spaces, makes them a natural candidate for active inference. We show how tensor networks can be used as a generative model for sequential data. Furthermore, we show how one can obtain beliefs from such a generative model and how an active inference agent can use these to compute the expected free energy. Finally, we demonstrate our method on the classic T-maze environment.
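The abstract outlines a pipeline in which a tensor network serves as a generative model over observation sequences, beliefs are read out of it, and the agent uses those beliefs to compute expected free energy. As a rough, self-contained illustration of the first two steps only (not the authors' implementation), the NumPy sketch below builds a tiny matrix product state "Born machine", assigns probabilities to discrete observation sequences via the squared contraction (Born rule), and extracts a belief over the next observation by brute-force marginalization; the shapes, the random initialization, and the helper names are assumptions made for this example.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
seq_len, n_obs, bond_dim = 4, 3, 5   # sequence length, observation alphabet size, bond dimension

# One order-3 MPS core per time step: (left bond, observation index, right bond).
# The boundary bonds have dimension 1.
cores = [rng.normal(size=(1 if t == 0 else bond_dim,
                          n_obs,
                          1 if t == seq_len - 1 else bond_dim))
         for t in range(seq_len)]

def amplitude(sequence):
    # Contract the MPS from left to right along one full observation sequence.
    vec = np.ones(1)
    for core, o in zip(cores, sequence):
        vec = vec @ core[:, o, :]
    return vec.item()

def normalizer():
    # Sum of squared amplitudes over all sequences (transfer-matrix contraction).
    mat = np.ones((1, 1))
    for core in cores:
        mat = np.einsum('ab,aoc,bod->cd', mat, core, core)
    return mat.item()

def prob(sequence):
    # Born rule: probability proportional to the squared amplitude.
    return amplitude(sequence) ** 2 / normalizer()

def next_obs_belief(prefix):
    # Belief over the next observation given a prefix, summing out all completions.
    remaining = seq_len - len(prefix) - 1
    belief = np.zeros(n_obs)
    for o in range(n_obs):
        for tail in product(range(n_obs), repeat=remaining):
            belief[o] += prob(list(prefix) + [o] + list(tail))
    return belief / belief.sum()

print(next_obs_belief([0, 1]))   # belief over the third observation after seeing 0, 1

In the paper the tensor network is trained from data and the resulting beliefs feed into the expected free energy, whereas this sketch stops at reading a belief out of a randomly initialized model.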
Pages: 285-297
Page count: 13