Learning Low-Dimensional Representations of Shape Data Sets with Diffeomorphic Autoencoders

Cited by: 5
Authors
Bone, Alexandre [1 ]
Louis, Maxime [1 ]
Colliot, Olivier [1 ]
Durrleman, Stanley [1 ]
Affiliations
[1] Sorbonne Univ, INRIA, ICM, ARAMIS Lab, Inserm, U1127, CNRS, UMR 7225, Paris, France
Keywords
REGISTRATION; MORPHOMETRY; FRAMEWORK; SURFACE
DOI
10.1007/978-3-030-20351-1_15
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Contemporary deformation-based morphometry offers parametric classes of diffeomorphisms that can be searched to compute the optimal transformation warping one shape into another, thus defining a similarity metric for shape objects. Extending such classes to capture the geometrical variability in ever more varied statistical situations is an active research topic. This quest for genericity, however, leads to computationally intensive estimation problems. Instead, we propose in this work to learn the best-adapted class of diffeomorphisms, along with its parametrization, for a shape data set of interest. Optimization is carried out with an auto-encoding variational inference approach, yielding a coherent model-estimator pair that we name the diffeomorphic auto-encoder. The main contributions are: (i) an original network-based method to construct diffeomorphisms, (ii) a current-splatting layer that allows neural network architectures to process meshes, and (iii) illustrations on simulated and real data sets showing differences in the learned statistical distributions of shapes compared to a standard approach.
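To make the abstract's high-level description more concrete, the following is a minimal PyTorch-style sketch of the core idea: an encoder maps an input shape to a latent code, a decoder maps the code to a stationary velocity field, and integrating that field (here via standard scaling and squaring) yields a diffeomorphic deformation that warps a template into the reconstruction. This is an assumption-laden toy on 2D images, not the authors' implementation: it omits the current-splatting layer the paper uses to handle meshes, and all network sizes, the image resolution, and the integrator settings are illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffeoAutoencoder(nn.Module):
    """Toy diffeomorphic autoencoder sketch (hypothetical, image-based)."""
    def __init__(self, latent_dim=8, size=64, n_squarings=6):
        super().__init__()
        self.size, self.n_squarings = size, n_squarings
        # Encoder: image -> latent code (mean and log-variance, VAE-style).
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(size * size, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim))
        # Decoder: latent code -> stationary velocity field (2 channels: dx, dy).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * size * size))
        # Template shape to be deformed; a learnable parameter here for simplicity.
        self.template = nn.Parameter(torch.zeros(1, 1, size, size))
        # Identity sampling grid in [-1, 1]^2, as expected by F.grid_sample.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                                torch.linspace(-1, 1, size), indexing="ij")
        self.register_buffer("identity_grid", torch.stack([xs, ys], dim=-1))

    def integrate(self, velocity):
        # Scaling and squaring: exp(v) obtained by composing (id + v / 2^K) with
        # itself K times; returns the displacement field of the deformation.
        flow = velocity / (2 ** self.n_squarings)
        for _ in range(self.n_squarings):
            grid = self.identity_grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)
            flow = flow + F.grid_sample(flow, grid, align_corners=True)
        return flow

    def forward(self, x):
        stats = self.encoder(x)
        mu, logvar = stats.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparametrization
        v = self.decoder(z).view(-1, 2, self.size, self.size)
        flow = self.integrate(v)                                   # diffeomorphic flow
        grid = self.identity_grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)
        template = self.template.expand(x.size(0), -1, -1, -1)
        recon = F.grid_sample(template, grid, align_corners=True)  # warp the template
        return recon, mu, logvar

# Example usage (hypothetical 64x64 shape images):
# model = DiffeoAutoencoder()
# recon, mu, logvar = model(torch.rand(4, 1, 64, 64))

In this sketch, training would combine a reconstruction loss between recon and x with the usual KL term on (mu, logvar), consistent with the auto-encoding variational inference scheme mentioned in the abstract; the data attachment term for meshes via currents is not reproduced here.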
Pages: 195-207
Page count: 13
Related Papers
50 records in total
  • [1] Learning Low-Dimensional Temporal Representations
    Su, Bing
    Wu, Ying
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [2] Learning to Predict Ischemic Stroke Growth on Acute CT Perfusion Data by Interpolating Low-Dimensional Shape Representations
    Lucas, Christian
    Kemmling, Andre
    Bouteidja, Nassim
    Aulmann, Linda F.
    Mamlouk, Amir Madany
    Heinrich, Mattias P.
    FRONTIERS IN NEUROLOGY, 2018, 9
  • [3] Incremental Construction of Low-Dimensional Data Representations
    Kuleshov, Alexander
    Bernstein, Alexander
    ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, 2016, 9896 : 55 - 67
  • [4] Learning Adaptive Multiscale Approximations to Data and Functions near Low-Dimensional Sets
    Liao, Wenjing
    Maggioni, Mauro
    Vigogna, Stefano
    2016 IEEE INFORMATION THEORY WORKSHOP (ITW), 2016,
  • [5] Learning Low-Dimensional Temporal Representations with Latent Alignments
    Su, Bing
    Wu, Ying
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (11) : 2842 - 2857
  • [6] SEQUENTIAL ACTIVE LEARNING OF LOW-DIMENSIONAL MODEL REPRESENTATIONS FOR RELIABILITY ANALYSIS
    Ehre, Max
    Papaioannou, Iason
    Sudret, Bruno
    Straub, Daniel
    SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2022, 44 (03): : B558 - B584
  • [7] Learning Low-Dimensional Representation of Bivariate Histogram Data
    Vaiciukynas, Evaldas
    Ulicny, Matej
    Pashami, Sepideh
    Nowaczyk, Slawomir
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2018, 19 (11) : 3723 - 3735
  • [8] 3-D diffeomorphic shape registration on hippocampal data sets
    Guo, HY
    Rangarajan, A
    Joshi, SC
    MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2005, PT 2, 2005, 3750 : 984 - 991
  • [9] Convolutional Autoencoders and Clustering for Low-dimensional Parametrization of Incompressible Flows
    Heiland, Jan
    Kim, Yongho
    IFAC PAPERSONLINE, 2022, 55 (30): : 430 - 435
  • [10] Predictive learning as a network mechanism for extracting low-dimensional latent space representations
    Recanatesi, Stefano
    Farrell, Matthew
    Lajoie, Guillaume
    Deneve, Sophie
    Rigotti, Mattia
    Shea-Brown, Eric
    NATURE COMMUNICATIONS, 12