A coupled autoencoder approach for multi-modal analysis of cell types

Cited by: 0
Authors
Gala, Rohan [1]
Gouwens, Nathan [1]
Yao, Zizhen [1]
Budzillo, Agata [1]
Penn, Osnat [1]
Tasic, Bosiljka [1]
Murphy, Gabe [1]
Zeng, Hongkui [1]
Sumbul, Uygar [1]
Affiliations
[1] Allen Inst, Seattle, WA 98109 USA
Keywords: none listed
DOI: not available
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent developments in high-throughput profiling of individual neurons have spurred data-driven exploration of the idea that there exist natural groupings of neurons, referred to as cell types. The promise of this idea is that the immense complexity of brain circuits can be reduced and studied effectively in terms of interactions between cell types. While clustering of neuron populations based on a particular data modality can be used to define cell types, such definitions are often inconsistent across different characterization modalities. We pose this issue of cross-modal alignment as an optimization problem and develop an approach based on coupled training of autoencoders as a framework for such analyses. We apply this framework to a Patch-seq dataset consisting of transcriptomic and electrophysiological profiles for the same set of neurons to study the consistency of representations across modalities, and to evaluate cross-modal data prediction ability. We explore the setting where only a subset of neurons is characterized with more than one modality, and demonstrate that representations learned by coupled autoencoders can be used to identify types sampled by only a single modality.
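The coupled-autoencoder idea described in the abstract can be illustrated with a toy objective: each modality gets its own autoencoder, and a coupling penalty pulls the two latent representations of the same cell toward each other. The sketch below is a minimal NumPy illustration of that assumed loss form (random linear encoders/decoders, mean-squared coupling term); it is not the authors' exact architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_autoencoder(d_in, d_latent):
    """Random linear encoder/decoder pair (illustrative stand-in for a deep net)."""
    W_enc = rng.normal(size=(d_in, d_latent)) / np.sqrt(d_in)
    W_dec = rng.normal(size=(d_latent, d_in)) / np.sqrt(d_latent)
    return W_enc, W_dec

def coupled_loss(x_t, x_e, ae_t, ae_e, lam=1.0):
    """Per-modality reconstruction errors plus a latent coupling term."""
    z_t = x_t @ ae_t[0]                      # transcriptomic latent
    z_e = x_e @ ae_e[0]                      # electrophysiological latent
    recon_t = np.mean((x_t - z_t @ ae_t[1]) ** 2)
    recon_e = np.mean((x_e - z_e @ ae_e[1]) ** 2)
    coupling = np.mean((z_t - z_e) ** 2)     # aligns the two representations
    return recon_t + recon_e + lam * coupling

# Toy Patch-seq-like batch: the same 8 cells measured in both modalities
# (feature counts are hypothetical, chosen only for illustration).
x_t = rng.normal(size=(8, 20))   # 20 gene-expression features
x_e = rng.normal(size=(8, 10))   # 10 electrophysiology features
ae_t = make_linear_autoencoder(20, 3)
ae_e = make_linear_autoencoder(10, 3)
loss = coupled_loss(x_t, x_e, ae_t, ae_e)
print(f"coupled loss: {loss:.3f}")
```

In training, this combined loss would be minimized over the encoder/decoder parameters, so that each latent space both reconstructs its own modality and agrees with the other modality's latent for the same cell.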
Pages: 10