Deep Cross-Modality and Resolution Graph Integration for Universal Brain Connectivity Mapping and Augmentation

Cited by: 2
Authors
Cinar, Ece [1 ]
Haseki, Sinem Elif [1 ]
Bessadok, Alaa [1 ]
Rekik, Islem [1 ]
Affiliations
[1] Istanbul Tech Univ, Fac Comp & Informat Engn, BASIRA Lab, Istanbul, Turkey
Funding
European Union Horizon 2020;
Keywords
Connectional brain templates; Multi-modal multi-resolution integration; Data augmentation; Graph neural network;
DOI
10.1007/978-3-031-21083-9_9
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The connectional brain template (CBT) captures the traits shared across all individuals in a given population of brain connectomes, thereby acting as a population fingerprint. Estimating a CBT from a population whose brain graphs are derived from diverse neuroimaging modalities (e.g., functional and structural) and at different resolutions (i.e., numbers of nodes) remains a formidable challenge. Such a network integration task allows for learning a rich and universal representation of brain connectivity across varying modalities and resolutions. The resulting CBT can then be used to generate entirely new multimodal brain connectomes, which can boost learning on downstream tasks such as brain state classification. Here, we propose the Multimodal Multiresolution Brain Graph Integrator Network (M2GraphIntegrator), the first multimodal multiresolution graph integration framework that maps a given connectomic population into a well-centered CBT. M2GraphIntegrator first unifies brain graph resolutions using resolution-specific graph autoencoders. Next, it integrates the resulting fixed-size brain graphs into a universal CBT lying at the center of its population. To preserve population diversity, we further design a novel clustering-based training sample selection strategy that leverages the most heterogeneous training samples. To ensure the biological soundness of the learned CBT, we propose a topological loss that minimizes the topological gap between the ground-truth brain graphs and the learned CBT. Our experiments show that from a single CBT one can generate realistic connectomic datasets including brain graphs of varying resolutions and modalities. We further demonstrate that our framework significantly outperforms benchmarks in reconstruction quality, augmentation, centeredness, and topological soundness.
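The abstract does not give the exact form of the topological loss, so the following is only a minimal illustrative sketch, assuming node strength (weighted degree, a standard graph-topology measure) as the topological descriptor and a plain CBT-vs-population gap; the function names and the element-wise-average stand-in CBT are hypothetical, not the paper's method.

```python
import numpy as np

def node_strengths(adj):
    """Node strength (weighted degree): row sums of a symmetric adjacency matrix."""
    return adj.sum(axis=1)

def topological_loss(cbt, population):
    """Mean absolute gap between the CBT's sorted node-strength profile and
    each ground-truth brain graph's sorted node-strength profile."""
    cbt_profile = np.sort(node_strengths(cbt))
    gaps = [np.abs(cbt_profile - np.sort(node_strengths(g))).mean()
            for g in population]
    return float(np.mean(gaps))

# Toy population: 3 symmetric weighted brain graphs on 4 nodes (zero diagonal).
rng = np.random.default_rng(0)
pop = []
for _ in range(3):
    a = rng.random((4, 4))
    a = (a + a.T) / 2
    np.fill_diagonal(a, 0)
    pop.append(a)

cbt = np.mean(pop, axis=0)  # naive element-wise average as a stand-in CBT
print(topological_loss(cbt, pop))
```

A loss of this shape is zero only when the CBT's topology matches every graph in the population, so minimizing it pulls the learned template toward a topologically central representative.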
Pages: 89-98 (10 pages)