Integrating Multimodal and Longitudinal Neuroimaging Data with Multi-Source Network Representation Learning

Cited by: 1
Authors
Zhang, Wen [1 ]
Braden, B. Blair [2 ]
Miranda, Gustavo [1 ]
Shu, Kai [3 ]
Wang, Suhang [4 ]
Liu, Huan [1 ]
Wang, Yalin [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp Informat & Decis Syst Engn, POB 878809, Tempe, AZ 85287 USA
[2] Arizona State Univ, Coll Hlth Solut, Tempe, AZ USA
[3] IIT, Dept Comp Sci, 10 W 31st St Room 226D, Chicago, IL 60616 USA
[4] Penn State Univ, Coll Informat Sci & Technol, E397 Westgate Bldg, University Pk, PA 16802 USA
Funding
United States National Institutes of Health (NIH);
Keywords
Multimodality; Longitudinal; Brain network fusion; Representation; FUNCTIONAL CONNECTIVITY; BRAIN; ANXIETY; DEPRESSION; FMRI;
DOI
10.1007/s12021-021-09523-w
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Uncovering the complex network of the brain is of great interest to the field of neuroimaging. By mining these rich datasets, scientists try to unveil the fundamental biological mechanisms of the human brain. However, the neuroimaging data used to construct brain networks are generally costly to collect, so extracting useful information from a limited sample of brain networks is demanding. Currently, there are two common trends in neuroimaging data collection that could be exploited to gain more information: 1) multimodal data, and 2) longitudinal data. It has been shown that these two types of data provide complementary information. Nonetheless, it is challenging to learn brain network representations that simultaneously capture network properties from multimodal as well as longitudinal datasets. Here we propose a general fusion framework for multi-source learning of brain networks: multimodal brain network fusion with longitudinal coupling (MMLC). In our framework, three layers of information are considered: cross-sectional similarity, multimodal coupling, and longitudinal consistency. Specifically, we jointly factorize multimodal networks and construct a rotation-based constraint to couple network variance across time. We also adopt the consensus factorization as the group-consistent pattern. Using two publicly available brain imaging datasets, we demonstrate that MMLC may better predict psychometric scores than several state-of-the-art brain network representation learning algorithms. Additionally, the discovered significant brain regions are consistent with previous literature. By integrating longitudinal and multimodal neuroimaging data, our new approach may boost statistical power and shed new light on neuroimaging network biomarkers for future psychometric prediction research.
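The abstract outlines the core computational recipe: each subject's multimodal brain networks are jointly factorized into a shared latent representation, consecutive time points are tied together through a rotation-based constraint, and a consensus factor captures the group-consistent pattern. The NumPy snippet below is a minimal illustrative sketch of that general recipe, not the authors' MMLC implementation; the function and parameter names (joint_factorize, nets, lam_modal, lam_time) and the specific gradient updates are assumptions made only for illustration.

```python
import numpy as np

def joint_factorize(nets, rank=10, lam_modal=1.0, lam_time=0.1,
                    n_iter=200, lr=1e-3, seed=0):
    """Illustrative multi-source factorization (NOT the published MMLC code).

    nets[(t, m)] is a symmetric (n x n) connectivity matrix for time point t
    and modality m. One latent factor U_t (n x rank) is learned per time
    point and shared across modalities (multimodal coupling); an orthogonal
    Procrustes rotation ties neighboring U_t together (longitudinal
    consistency), and the average factor serves as a consensus pattern.
    """
    rng = np.random.default_rng(seed)
    times = sorted({t for t, _ in nets})
    mods = sorted({m for _, m in nets})
    n = next(iter(nets.values())).shape[0]
    U = {t: 0.1 * rng.standard_normal((n, rank)) for t in times}

    for _ in range(n_iter):
        for t in times:
            grad = np.zeros((n, rank))
            # Data term: approximate every modality at time t by U_t U_t^T.
            for m in mods:
                A = nets[(t, m)]
                grad += lam_modal * 4.0 * (U[t] @ U[t].T - A) @ U[t]
            # Longitudinal term: align neighboring factors with a rotation
            # (orthogonal Procrustes) and penalize the remaining difference.
            for s in (t - 1, t + 1):
                if s in U:
                    Q, _, Pt = np.linalg.svd(U[s].T @ U[t])
                    R = Q @ Pt                    # rotation of U_s toward U_t
                    grad += lam_time * 2.0 * (U[t] - U[s] @ R)
            U[t] -= lr * grad

    # Consensus representation: average of the per-time-point factors.
    consensus = np.mean([U[t] for t in times], axis=0)
    return U, consensus

# Toy usage on random symmetric "networks" (two modalities, three visits).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 30
    nets = {}
    for t in range(3):
        for m in range(2):
            X = rng.standard_normal((n, n))
            nets[(t, m)] = (X + X.T) / 2.0
    U, consensus = joint_factorize(nets, rank=5, n_iter=100)
    print(consensus.shape)  # (30, 5)
```

This loop only mimics, for a single subject, the three ingredients named in the abstract (joint factorization across modalities, rotation-based temporal coupling, and a consensus factor); it is not meant to reproduce the paper's actual objective or optimization.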
Pages: 301-316
Page count: 16
Related Papers (50 in total)
  • [1] Medical Concept Representation Learning from Multi-source Data
    Bai, Tian
    Egleston, Brian L.
    Bleicher, Richard
    Vucetic, Slobodan
    Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019: 4897-4903
  • [2] Exploiting Multi-source Data for Adversarial Driving Style Representation Learning
    Liu, Zhidan
    Zheng, Junhong
    Gong, Zengyang
    Zhang, Haodi
    Wu, Kaishun
    Database Systems for Advanced Applications (DASFAA 2021), Pt I, 2021, 12681: 491-508
  • [3] Multi-source Multimodal Data and Deep Learning for Disaster Response: A Systematic Review
    Algiriyage, Nilani
    Prasanna, Raj
    Stock, Kristin
    Doyle, Emma E. H.
    Johnston, David
    SN Computer Science, 2022, 3(1)
  • [4] Multi-source feature learning for joint analysis of incomplete multiple heterogeneous neuroimaging data
    Yuan, Lei
    Wang, Yalin
    Thompson, Paul M.
    Narayan, Vaibhav A.
    Ye, Jieping
    NeuroImage, 2012, 61(3): 622-632
  • [5] Learning from multi-source data
    Fromont, E.
    Cordier, M. O.
    Quiniou, R.
    Knowledge Discovery in Databases: PKDD 2004, Proceedings, 2004, 3202: 503-505
  • [6] Emotional representation of music in multi-source data by the Internet of Things and deep learning
    Wang, Chunqiu
    Ko, Young Chun
    The Journal of Supercomputing, 2023, 79(1): 349-366
  • [7] Identifying disruptive technologies by integrating multi-source data
    Liu, Xiwen
    Wang, Xuezhao
    Lyu, Lucheng
    Wang, Yanpeng
    Scientometrics, 2022, 127(9): 5325-5351