Isometric Quotient Variational Auto-Encoders for Structure-Preserving Representation Learning

Cited by: 0
Authors
Huh, In [1 ]
Jeong, Changwook [2 ]
Choe, Jae Myung [1 ]
Kim, Young-Gu [1 ]
Kim, Dae Sin [1 ]
Affiliations
[1] Samsung Elect, Innovat Ctr, CSE Team, Suwon, South Korea
[2] UNIST, Grad Sch Semicond Mat & Devices Engn, Ulsan, South Korea
Funding
National Research Foundation of Singapore
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We study structure-preserving low-dimensional representation of a data manifold embedded in a high-dimensional observation space based on variational auto-encoders (VAEs). We approach this by decomposing the data manifold M as M = M/G × G, where G and M/G are a group of symmetry transformations and a quotient space of M up to G, respectively. From this perspective, we define the structure-preserving representation of such a manifold as a latent space Z which is isometrically isomorphic (i.e., distance-preserving) to the quotient space M/G rather than M (i.e., symmetry-preserving). To this end, we propose a novel auto-encoding framework, named isometric quotient VAEs (IQVAEs), that can extract the quotient space from observations and learn the Riemannian isometry of the extracted quotient in an unsupervised manner. Empirical proof-of-concept experiments reveal that the proposed method can find a meaningful representation of the learned data and outperform other competitors for downstream tasks.
Pages: 13
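As a rough illustration of the quotient-isometry idea described in the abstract, the sketch below shows one possible way to push a VAE latent space toward isometry with a quotient manifold M/G, assuming a known finite symmetry group G (here, 90-degree image rotations) and a pixel-space quotient distance. This is a minimal hypothetical sketch, not the authors' IQVAE implementation; the function names, the choice of group, and the distance-matching regularizer are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): regularize a VAE latent space Z so that
# pairwise latent distances approximate quotient distances d_{M/G}, where G is assumed
# to be the finite group of 90-degree image rotations. All names are illustrative.
import torch
import torch.nn.functional as F

def quotient_distance(x1, x2):
    """Approximate d_{M/G}(x1, x2): the smallest pixel-space distance between x1 and
    any transform g(x2) for g in G = {0, 90, 180, 270 degree rotations}."""
    dists = [torch.norm((x1 - torch.rot90(x2, k, dims=(-2, -1))).flatten(1), dim=1)
             for k in range(4)]
    return torch.stack(dists, dim=0).min(dim=0).values  # shape: (batch,)

def isometry_loss(x, z):
    """Penalize mismatch between latent distances and quotient distances over
    random pairs drawn from the mini-batch (x: images, z: latent means)."""
    perm = torch.randperm(x.size(0))
    d_quotient = quotient_distance(x, x[perm])   # distances in M/G
    d_latent = torch.norm(z - z[perm], dim=1)    # distances in Z
    return F.mse_loss(d_latent, d_quotient)

# Usage sketch: total loss = standard VAE ELBO + lambda * isometry_loss(x, z_mean),
# where lambda is a hypothetical weighting hyperparameter.
```

In this sketch, the minimum over group elements factors the symmetry out of the distance, while the mean-squared penalty encourages the latent map to be (approximately) distance-preserving with respect to that quotient metric; the actual IQVAE learns the quotient structure in an unsupervised manner rather than assuming a known group.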