A coupled autoencoder approach for multi-modal analysis of cell types

Cited by: 0
Authors:
Gala, Rohan [1 ]
Gouwens, Nathan [1 ]
Yao, Zizhen [1 ]
Budzillo, Agata [1 ]
Penn, Osnat [1 ]
Tasic, Bosiljka [1 ]
Murphy, Gabe [1 ]
Zeng, Hongkui [1 ]
Sumbul, Uygar [1 ]
Affiliations:
[1] Allen Inst, Seattle, WA 98109 USA
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recent developments in high-throughput profiling of individual neurons have spurred data-driven exploration of the idea that there exist natural groupings of neurons referred to as cell types. The promise of this idea is that the immense complexity of brain circuits can be reduced, and effectively studied by means of interactions between cell types. While clustering of neuron populations based on a particular data modality can be used to define cell types, such definitions are often inconsistent across different characterization modalities. We pose this issue of cross-modal alignment as an optimization problem and develop an approach based on coupled training of autoencoders as a framework for such analyses. We apply this framework to a Patch-seq dataset consisting of transcriptomic and electrophysiological profiles for the same set of neurons to study consistency of representations across modalities, and evaluate cross-modal data prediction ability. We explore the problem where only a subset of neurons is characterized with more than one modality, and demonstrate that representations learned by coupled autoencoders can be used to identify types sampled only by a single modality.
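The core objective sketched in the abstract — per-modality autoencoder reconstruction plus a penalty that couples the latent representations of the same cells across modalities — can be illustrated with a minimal numpy sketch. All names, dimensions, and the linear encoders/decoders here are illustrative assumptions, not the paper's exact architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the same 100 "neurons" measured in two modalities
# (e.g. transcriptomic and electrophysiological features).
x_t = rng.normal(size=(100, 20))  # transcriptomic profiles
x_e = rng.normal(size=(100, 8))   # electrophysiological profiles
latent_dim = 3

# Untrained linear encoder/decoder weights for each modality (illustrative;
# the paper uses trained neural networks).
enc_t = rng.normal(scale=0.1, size=(20, latent_dim))
dec_t = rng.normal(scale=0.1, size=(latent_dim, 20))
enc_e = rng.normal(scale=0.1, size=(8, latent_dim))
dec_e = rng.normal(scale=0.1, size=(latent_dim, 8))

def coupled_loss(x_t, x_e, lam=1.0):
    """Sum of per-modality reconstruction errors plus a coupling term
    that pulls the two latent codes of the same cell together."""
    z_t = x_t @ enc_t                                   # latent code, modality T
    z_e = x_e @ enc_e                                   # latent code, modality E
    recon_t = np.mean((z_t @ dec_t - x_t) ** 2)         # reconstruction, T
    recon_e = np.mean((z_e @ dec_e - x_e) ** 2)         # reconstruction, E
    coupling = np.mean((z_t - z_e) ** 2)                # cross-modal alignment
    return recon_t + recon_e + lam * coupling

loss = coupled_loss(x_t, x_e)
```

Minimizing such an objective jointly over both autoencoders encourages a shared latent space, which is what enables cross-modal prediction (decode one modality from the other's code) and assignment of single-modality cells to types defined in the shared space.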
Pages: 10