Deep Cross-Modality and Resolution Graph Integration for Universal Brain Connectivity Mapping and Augmentation

Cited by: 2
Authors
Cinar, Ece [1 ]
Haseki, Sinem Elif [1 ]
Bessadok, Alaa [1 ]
Rekik, Islem [1 ]
Affiliations
[1] Istanbul Tech Univ, Fac Comp & Informat Engn, BASIRA Lab, Istanbul, Turkey
Funding
EU Horizon 2020
Keywords
Connectional brain templates; Multi-modal multi-resolution integration; Data augmentation; Graph neural network;
DOI
10.1007/978-3-031-21083-9_9
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The connectional brain template (CBT) captures the traits shared across all individuals of a given population of brain connectomes, thereby acting as a population fingerprint. Estimating a CBT from a population whose brain graphs are derived from diverse neuroimaging modalities (e.g., functional and structural) and at different resolutions (i.e., numbers of nodes) remains a formidable challenge. Such a network integration task enables learning a rich, universal representation of brain connectivity across varying modalities and resolutions. The resulting CBT can then be used to generate entirely new multimodal brain connectomes, which can boost learning in downstream tasks such as brain state classification. Here, we propose the Multimodal Multiresolution Brain Graph Integrator Network (M2GraphIntegrator), the first multimodal multiresolution graph integration framework that maps a given connectomic population into a well-centered CBT. M2GraphIntegrator first unifies brain graph resolutions using resolution-specific graph autoencoders. Next, it integrates the resulting fixed-size brain graphs into a universal CBT lying at the center of its population. To preserve population diversity, we further design a novel clustering-based training sample selection strategy that leverages the most heterogeneous training samples. To ensure the biological soundness of the learned CBT, we propose a topological loss that minimizes the topological gap between the ground-truth brain graphs and the learned CBT. Our experiments show that from a single CBT one can generate realistic connectomic datasets including brain graphs of varying resolutions and modalities. We further demonstrate that our framework significantly outperforms benchmarks in reconstruction quality, augmentation, centeredness, and topological soundness.
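The abstract's notions of "centeredness" and "topological gap" can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes (hypothetically) that the topological descriptor is node strength (the weighted degree of each node) and that the CBT and the population graphs have already been mapped to a common resolution, as the resolution-specific autoencoders in the paper would ensure. The helper names `node_strength` and `topological_loss` are illustrative, not from the paper.

```python
import numpy as np

def node_strength(adj):
    """Topological descriptor: sum of edge weights incident to each node."""
    return adj.sum(axis=1)

def topological_loss(cbt, population):
    """Mean absolute gap between the CBT's node-strength vector and that of
    each ground-truth brain graph (all assumed at the same resolution)."""
    cbt_strength = node_strength(cbt)
    gaps = [np.abs(cbt_strength - node_strength(g)).mean() for g in population]
    return float(np.mean(gaps))

# Toy population of 3 weighted, symmetric, self-loop-free 4-node connectomes.
rng = np.random.default_rng(0)
population = []
for _ in range(3):
    a = rng.random((4, 4))
    a = (a + a.T) / 2          # symmetrize: undirected brain graph
    np.fill_diagonal(a, 0)     # no self-connections
    population.append(a)

# A naive "centered" template: the element-wise mean of the population.
cbt = np.mean(population, axis=0)
loss = topological_loss(cbt, population)
```

A well-centered template keeps this loss small; for instance, a degenerate all-zeros template incurs a larger topological gap than the population mean above.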
Pages: 89-98
Page count: 10