Cross-modal alignment and contrastive learning for enhanced cancer survival prediction

Times Cited: 0
Authors
Li, Tengfei [1 ]
Zhou, Xuezhong [1 ]
Xue, Jingyan [1 ]
Zeng, Lili [1 ]
Zhu, Qiang [1 ]
Wang, Ruiping [1 ]
Yu, Haibin [2 ]
Xia, Jianan [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing 100044, Peoples R China
[2] Henan Univ Chinese Med, Affiliated Hosp 1, Zhengzhou 450000, Henan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Survival prediction; Histopathological image; Multi-omics; Multi-modal fusion; MODEL;
DOI
10.1016/j.cmpb.2025.108633
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Codes
081203 ; 0835 ;
Abstract
Background and Objective: Integrating multimodal data such as pathology images and genomics is crucial for understanding cancer heterogeneity and the complexity of personalized treatment, and for improving survival prediction. However, most current prognostic methods are limited to a single domain, either histopathology or genomics, which inevitably reduces their potential for accurate patient outcome prediction. Despite advances in the joint analysis of pathology and genomic data, existing approaches do not adequately address the intricate relationships between modalities.
Methods: This paper introduces CPathomic, a method for multimodal survival prediction. By leveraging whole-slide pathology images to guide local pathological features, the method mitigates substantial intermodal differences through a cross-modal representational contrastive learning module, and it enables interactive learning between modalities through cross-modal and gated attention modules.
Results: Extensive experiments on five public TCGA datasets demonstrate that the CPathomic framework effectively bridges modality gaps and consistently outperforms alternative multimodal survival prediction methods.
Conclusion: The proposed CPathomic model demonstrates the potential of contrastive learning and cross-modal attention for representing and fusing multimodal data, improving patient survival prediction.
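The abstract names two core mechanisms: a cross-modal contrastive module that aligns whole-slide pathology and genomic representations, and cross-modal/gated attention for fusing them. The following is a minimal, illustrative PyTorch sketch of what such a contrastive alignment loss and gated fusion could look like; all class names, dimensions, and the risk head (CrossModalAlignment, GatedFusion, path_emb, omic_emb) are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only: a symmetric InfoNCE loss aligning pathology and
# genomic embeddings, followed by a gated fusion and a risk head.
# Names and dimensions are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAlignment(nn.Module):
    """Contrastive alignment between slide-level pathology and omics embeddings."""

    def __init__(self, path_dim=768, omic_dim=256, embed_dim=256, temperature=0.07):
        super().__init__()
        self.path_proj = nn.Linear(path_dim, embed_dim)   # project pathology features
        self.omic_proj = nn.Linear(omic_dim, embed_dim)   # project genomic features
        self.temperature = temperature

    def forward(self, path_emb, omic_emb):
        # L2-normalise so the dot product is a cosine similarity
        p = F.normalize(self.path_proj(path_emb), dim=-1)
        g = F.normalize(self.omic_proj(omic_emb), dim=-1)
        logits = p @ g.t() / self.temperature             # (B, B) similarity matrix
        targets = torch.arange(p.size(0), device=p.device)
        # Symmetric InfoNCE: matched patient pairs are positives, all others negatives
        loss = 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
        return loss, p, g

class GatedFusion(nn.Module):
    """Gated combination of the two aligned modality embeddings plus a risk score."""

    def __init__(self, embed_dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * embed_dim, embed_dim), nn.Sigmoid())
        self.risk_head = nn.Linear(embed_dim, 1)          # scalar risk score per patient

    def forward(self, p, g):
        z = self.gate(torch.cat([p, g], dim=-1))          # per-feature gate in [0, 1]
        fused = z * p + (1.0 - z) * g
        return self.risk_head(fused).squeeze(-1)

if __name__ == "__main__":
    B = 8
    path_emb = torch.randn(B, 768)    # e.g. pooled WSI patch features (assumed)
    omic_emb = torch.randn(B, 256)    # e.g. multi-omics profile features (assumed)
    align = CrossModalAlignment()
    fuse = GatedFusion()
    contrastive_loss, p, g = align(path_emb, omic_emb)
    risk = fuse(p, g)
    print(contrastive_loss.item(), risk.shape)            # scalar loss, (8,) risk scores

In practice, the contrastive term would typically be combined with a survival objective (for example a Cox partial-likelihood or a discrete-time negative log-likelihood), and path_emb would come from aggregated WSI patch features; these choices are assumptions here, not details reported in the abstract.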
Pages: 10