Deep multi-view contrastive learning for cancer subtype identification

Cited by: 5
Authors
Chen, Wenlan
Wang, Hong [1 ]
Liang, Cheng [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Jinan 250358, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
cancer subtype; multi-view; contrastive learning; clustering;
DOI
10.1093/bib/bbad282
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline classification codes
071010; 081704;
Abstract
Cancer heterogeneity poses great challenges to the design of precise therapeutic strategies for cancer treatment. Cancer subtype identification aims to detect groups of patients with distinct molecular profiles and can thus provide new clues for effective clinical therapies. Despite considerable effort, it remains challenging to develop computational methods that efficiently integrate multi-omics datasets for this task. In this paper, we propose a novel self-supervised learning model, Deep Multi-view Contrastive Learning (DMCL), for cancer subtype identification. Specifically, by incorporating a reconstruction loss, a contrastive loss and a clustering loss into a unified framework, our model simultaneously encodes sample-discriminative information into the extracted feature representations and preserves the sample cluster structures in the embedded space. Moreover, DMCL is an end-to-end framework in which the cancer subtypes are obtained directly from the model outputs. We compare DMCL with eight alternatives, ranging from classic cancer subtype identification methods to recently developed state-of-the-art systems, on 10 widely used cancer multi-omics datasets as well as an integrated dataset, and the experimental results validate the superior performance of our method. We further conduct a case study on liver cancer, and the analysis results indicate that different subtypes might respond differently to the selected chemotherapeutic drugs.
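The abstract describes DMCL only at a high level: a unified objective combining a reconstruction loss, a contrastive loss and a clustering loss, trained end to end so that subtype labels come straight from the model outputs. The paper's actual architecture is not reproduced in this record, so the sketch below is merely an illustration of that general recipe in PyTorch, assuming per-view autoencoders, an NT-Xent contrastive term that treats two omics views of the same sample as a positive pair, and a DEC-style KL clustering term. All names, layer sizes, the averaging fusion and the (unit) loss weights are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (assumptions, not the authors' code): a DMCL-style
# objective with reconstruction + contrastive + clustering terms, end to end.
import torch
import torch.nn.functional as F
from torch import nn


class ViewAutoencoder(nn.Module):
    """One autoencoder per omics view; the encoder output is the embedding."""

    def __init__(self, in_dim, z_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss: two views of the same sample are the positive pair."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, d)
    sim = z @ z.t() / tau                              # scaled cosine similarity
    sim.fill_diagonal_(-1e9)                           # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


def clustering_kl(z, centroids, alpha=1.0):
    """DEC-style loss: KL between soft assignments q and sharpened targets p."""
    q = (1.0 + torch.cdist(z, centroids) ** 2 / alpha) ** (-(alpha + 1) / 2)
    q = q / q.sum(dim=1, keepdim=True)                 # Student-t assignments
    p = q ** 2 / q.sum(dim=0)
    p = (p / p.sum(dim=1, keepdim=True)).detach()      # fixed target distribution
    return F.kl_div(q.log(), p, reduction='batchmean')


# One optimization step on two synthetic views standing in for real omics data
# (e.g. mRNA expression with 200 features, DNA methylation with 300).
x1, x2 = torch.randn(64, 200), torch.randn(64, 300)
ae1, ae2 = ViewAutoencoder(200), ViewAutoencoder(300)
centroids = nn.Parameter(torch.randn(5, 32))           # 5 candidate subtypes
opt = torch.optim.Adam([*ae1.parameters(), *ae2.parameters(), centroids],
                       lr=1e-3)

z1, rec1 = ae1(x1)
z2, rec2 = ae2(x2)
z = (z1 + z2) / 2                                      # naive fusion of the views
loss = (F.mse_loss(rec1, x1) + F.mse_loss(rec2, x2)    # reconstruction
        + nt_xent(z1, z2)                              # contrastive
        + clustering_kl(z, centroids))                 # clustering
opt.zero_grad()
loss.backward()
opt.step()

# End-to-end subtype labels: argmax over the model's soft cluster assignments.
with torch.no_grad():
    q = 1.0 / (1.0 + torch.cdist(z, centroids) ** 2)
    subtypes = q.argmax(dim=1)
```

In a full pipeline of this kind, the autoencoders would typically be pretrained on reconstruction alone and the centroids initialized with k-means on the fused embeddings before the joint objective is optimized.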
Pages: 10