Learning joint latent representations based on information maximization

Cited by: 24
Authors
Ye, Fei [1 ]
Bors, Adrian G. [1 ]
Institution
[1] Univ York, Dept Comp Sci, York YO10 5GH, N Yorkshire, England
Keywords
Disentangled learning; Variational Autoencoders (VAE); Generative Adversarial Nets (GAN); Representation learning; Mutual Information
DOI
10.1016/j.ins.2021.03.007
Chinese Library Classification (CLC) code
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Learning disentangled and interpretable representations is an important aspect of information understanding. In this paper, we propose a novel deep learning model representing both discrete and continuous latent variable spaces, which can be used in either supervised or unsupervised learning. The proposed model is trained using an optimization function employing the mutual information maximization criterion. For the unsupervised learning setting, we define a lower bound on the mutual information between the joint distribution of the latent variables corresponding to the real data and those generated by the model. Maximizing this lower bound during training induces the learning of disentangled and interpretable data representations. Such representations can be used for attribute manipulation and image editing tasks. (c) 2021 Elsevier Inc. All rights reserved.
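The abstract does not state the bound explicitly, but the standard variational (InfoGAN-style) lower bound on the mutual information between latent codes and generated data matches its description: I(c; G(z, c)) >= E[log Q(c | x)] + H(c), where Q is an auxiliary recognition network. Below is a minimal, hedged PyTorch sketch of such a bound for joint discrete and continuous codes; the dimensions, the `Recognizer` class, and the unit-variance Gaussian assumption are illustrative choices, not the authors' implementation.

```python
# Sketch of an InfoGAN-style variational lower bound on the mutual
# information between latent codes and generated data, for a joint
# discrete code c_disc and continuous code c_cont. All names and
# shapes here are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

DISC_DIM, CONT_DIM, DATA_DIM = 10, 2, 64  # hypothetical sizes

class Recognizer(nn.Module):
    """Approximate posterior Q(c | x): categorical logits for the
    discrete code and a Gaussian mean for the continuous code."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU())
        self.disc_head = nn.Linear(128, DISC_DIM)  # categorical logits
        self.cont_head = nn.Linear(128, CONT_DIM)  # Gaussian mean (unit var.)

    def forward(self, x):
        h = self.backbone(x)
        return self.disc_head(h), self.cont_head(h)

def mi_lower_bound(recognizer, x_fake, c_disc, c_cont):
    """Variational bound I(c; G(z, c)) >= E[log Q(c | x)] + H(c).
    H(c) is constant w.r.t. the networks, so it suffices to maximize
    the expected log-likelihood terms returned here."""
    logits, mean = recognizer(x_fake)
    # Discrete part: log-likelihood of the sampled category under Q.
    ll_disc = -F.cross_entropy(logits, c_disc)
    # Continuous part: Gaussian log-likelihood with fixed unit variance
    # (an assumption; the variance could also be predicted by Q).
    ll_cont = -0.5 * ((c_cont - mean) ** 2).sum(dim=1).mean()
    return ll_disc + ll_cont

# Toy usage: in a full model, x_fake would come from the generator.
if __name__ == "__main__":
    q = Recognizer()
    c_disc = torch.randint(0, DISC_DIM, (32,))
    c_cont = torch.randn(32, CONT_DIM)
    x_fake = torch.randn(32, DATA_DIM)  # stand-in for G(z, c)
    loss = -mi_lower_bound(q, x_fake, c_disc, c_cont)  # maximize bound
    loss.backward()
    print(float(loss))
```

In practice this term would be added, with a weighting coefficient, to the generator and recognizer objectives; only the recognition term needs gradients, since the code entropy H(c) is fixed by the sampling distribution.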
Pages: 216-236
Number of pages: 21
Related papers (50 records in total)
  • [41] Learning motion patterns in unstructured scene based on latent structural information
    Liu, Weibin
    Chong, Xinyi
    Huang, Pengfei
    Badler, Norman I.
    JOURNAL OF VISUAL LANGUAGES AND COMPUTING, 2014, 25 (01): 43-53
  • [42] Learning Hierarchical Features with Joint Latent Space Energy-Based Prior
    Cui, Jiali
    Wu, Ying Nian
    Han, Tian
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023: 2218-2227
  • [44] Multi-view Subspace Clustering via Joint Latent Representations
    Dong, Wenhua
    Wu, Xiao-jun
    Xu, Tianyang
    NEURAL PROCESSING LETTERS, 2022, 54 (03): 1879-1901
  • [45] Learning Speaker Representations with Mutual Information
    Ravanelli, Mirco
    Bengio, Yoshua
    INTERSPEECH 2019, 2019: 1153-1157
  • [46] Learning Data Representations with Joint Diffusion Models
    Deja, Kamil
    Trzcinski, Tomasz
    Tomczak, Jakub M.
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT II, 2023, 14170: 543-559
  • [47] Learning Disentangled Joint Continuous and Discrete Representations
    Dupont, Emilien
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [48] DeepArt: Learning Joint Representations of Visual Arts
    Mao, Hui
    Cheung, Ming
    She, James
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017: 1183-1191
  • [49] Improving information-theoretic competitive learning by accentuated information maximization
    Kamimura, R
    INTERNATIONAL JOURNAL OF GENERAL SYSTEMS, 2005, 34 (03): 219-233
  • [50] Joint Optimization of Manifold Learning and Sparse Representations
    Ptucha, Raymond
    Savakis, Andreas
    2013 10TH IEEE INTERNATIONAL CONFERENCE AND WORKSHOPS ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG), 2013