Decoupled representation for multi-view learning

Cited: 0
Authors
Sun, Shiding [1 ,2 ,3 ]
Wang, Bo [2 ]
Tian, Yingjie [3 ,4 ,5 ,6 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[2] Univ Int Business & Econ, Sch Informat Technol & Management, Beijing 100029, Peoples R China
[3] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[4] Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China
[5] Chinese Acad Sci, Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
[6] UCAS, MOE Social Sci Lab Digital Econ Forecasts & Policy, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-view learning; Representation learning; Information bottleneck; Contrastive learning;
DOI
10.1016/j.patcog.2024.110377
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from multi-view data is a central topic in advanced deep-model applications. Existing efforts mainly focus on exploiting shared information to maximize consensus among the views. However, after reasonably discarding superfluous task-irrelevant noise, the view-specific information is equally essential to downstream tasks. In this paper, we propose to decouple multi-view representation learning into shared and specific information extraction with parallel branches, and to seamlessly adopt feature fusion in end-to-end models. The common feature is obtained via view-agnostic contrastive learning and view-discriminative training to minimize the discrepancy across views. Simultaneously, the specific feature is learned with orthogonality constraints to minimize view-level correlation. Besides, the semantic information in the features is preserved through supervised training. After disentangling the representations, we fuse the mutually complementary common and specific features for downstream tasks. In particular, we provide a theoretical explanation of our method from an information-bottleneck perspective. Compared with state-of-the-art multi-view models on benchmark datasets, we empirically demonstrate the advantage of our method on several downstream tasks, such as ordinary classification and few-shot learning. In addition, extensive experiments validate the robustness and transferability of our approach when the representation learned on a source dataset is applied to several target datasets.
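The abstract describes three training signals acting on two parallel branches: a contrastive objective aligning the shared features of different views, an orthogonality constraint decorrelating the shared and view-specific features, and a fusion of the two parts for downstream use. The paper's actual networks, losses, and hyperparameters are not given in this record, so the following is only a minimal NumPy sketch of those three ingredients; `info_nce`, `orthogonality_penalty`, and `fuse` are hypothetical names introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, temperature=0.5):
    """Toy InfoNCE-style contrastive loss between two views' shared features.

    Row i of z1 and row i of z2 form the positive pair; all other rows
    act as negatives. Minimizing this aligns the views' shared features.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))        # pull positives together

def orthogonality_penalty(shared, specific):
    """Squared Frobenius norm of the cross-correlation between the shared
    and view-specific features; zero when the two subspaces are orthogonal."""
    return float(np.sum((shared.T @ specific) ** 2))

def fuse(shared, specific):
    """Fuse the complementary parts; concatenation is one simple choice."""
    return np.concatenate([shared, specific], axis=1)

# Demo with random stand-in features for a batch of 8 samples
shared_v1 = rng.normal(size=(8, 4))
shared_v2 = rng.normal(size=(8, 4))
specific_v1 = rng.normal(size=(8, 4))

total = info_nce(shared_v1, shared_v2) + orthogonality_penalty(shared_v1, specific_v1)
fused = fuse(shared_v1, specific_v1)
print(fused.shape)  # (8, 8)
```

In an end-to-end model these terms would be weighted and summed with a supervised loss on the fused feature, and the branches trained jointly by backpropagation; the sketch only shows the shape of the objective, not the training loop.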
Pages: 12
Related Papers
(50 in total; items [31]-[40] shown)
  • [31] Joint representation learning for multi-view subspace clustering
    Zhang, Guang-Yu
    Zhou, Yu-Ren
    Wang, Chang-Dong
    Huang, Dong
    He, Xiao-Yu
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2021, 166
  • [32] Deep multi-view representation learning for social images
    Huang, Feiran
    Zhang, Xiaoming
    Zhao, Zhonghua
    Li, Zhoujun
    He, Yueying
    [J]. APPLIED SOFT COMPUTING, 2018, 73 : 106 - 118
  • [33] Uncertainty-Aware Multi-View Representation Learning
    Geng, Yu
    Han, Zongbo
    Zhang, Changqing
    Hu, Qinghua
    [J]. PROCEEDINGS OF THE THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-21), 2021, 35 : 7545 - 7553
  • [34] Instance-wise multi-view representation learning
    Li, Dan
    Wang, Haibao
    Wang, Yufeng
    Wang, Shengpei
    [J]. INFORMATION FUSION, 2023, 91 : 612 - 622
  • [35] Flexible Multi-View Representation Learning for Subspace Clustering
    Li, Ruihuang
    Zhang, Changqing
    Hu, Qinghua
    Zhu, Pengfei
    Wang, Zheng
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2916 - 2922
  • [36] Representation Learning in Multi-view Clustering: A Literature Review
    Chen, Man-Sheng
    Lin, Jia-Qi
    Li, Xiang-Long
    Liu, Bao-Yu
    Wang, Chang-Dong
    Huang, Dong
    Lai, Jian-Huang
    [J]. DATA SCIENCE AND ENGINEERING, 2022, 7 (03) : 225 - 241
  • [38] mulEEG: A Multi-view Representation Learning on EEG Signals
    Kumar, Vamsi
    Reddy, Likith
    Sharma, Shivam Kumar
    Dadi, Kamalaker
    Yarra, Chiranjeevi
    Bapi, Raju S.
    Rajendran, Srijithesh
    [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT III, 2022, 13433 : 398 - 407
  • [39] Adaptive Latent Representation for Multi-view Subspace Learning
    Zhang, Yuemei
    Wang, Xiumei
    Gao, Xinbo
    [J]. 2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 1229 - 1234
  • [40] Learning topographic representation for multi-view image patterns
    Li, SZ
    Lv, XG
    Zhang, HJ
    Fu, QD
    Cheng, YM
    [J]. 2001 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2001, : 1329 - 1332