Self-attention Multi-view Representation Learning with Diversity-promoting Complementarity

Cited by: 0
Authors
Liu, Jian-wei [1 ]
Ding, Xi-hao [1 ]
Lu, Run-kun [1 ]
Luo, Xionglin [1 ]
Affiliations
[1] China Univ Petr, Sch Informat Sci & Engn, Dept Automat, Beijing 102249, Peoples R China
Keywords
Multi-view Learning; Self-attention Mechanism; Complementary Information with Diversity
DOI
Not available
CLC Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Multi-view learning attempts to build a better-performing model by exploiting the consensus and/or complementarity among multi-view data. However, in terms of complementarity, most existing approaches can only find representations with a single kind of complementarity rather than complementary information with diversity. In this paper, to utilize complementarity and consistency simultaneously and to give free rein to the potential of deep learning in capturing diversity-promoting complementarity for multi-view representation learning, we propose a novel supervised multi-view representation learning algorithm, called Self-Attention Multi-View network with Diversity-Promoting Complementarity (SAMVDPC), which exploits consistency through a group of encoders and uses self-attention to find complementary information entailing diversity. Extensive experiments conducted on eight real-world datasets demonstrate the effectiveness of the proposed method and show its superiority over several baseline methods that consider only single complementary information.
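The pipeline described in the abstract (per-view encoders mapping each view into a shared space, self-attention across the view representations, plus a term that keeps the views diverse) can be sketched in minimal NumPy form. This is an illustrative sketch only, not the authors' implementation: the linear encoders, the cosine-similarity diversity penalty, and all function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_views(views, weights):
    # One encoder per view (here just a linear map) projecting
    # each view into a shared d-dimensional space.
    return np.stack([v @ W for v, W in zip(views, weights)])  # (n_views, d)

def self_attention(H):
    # Scaled dot-product self-attention across the view axis:
    # each row of the output is a mixture of all view representations,
    # letting complementary information flow between views.
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)                      # (n_views, n_views)
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    A = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return A @ H                                       # (n_views, d)

def diversity_penalty(H):
    # Illustrative diversity-promoting regularizer: mean pairwise
    # cosine similarity between view representations (lower = more diverse).
    Hn = H / np.linalg.norm(H, axis=-1, keepdims=True)
    S = Hn @ Hn.T
    return S[~np.eye(len(H), dtype=bool)].mean()

# Three toy views with different input dimensionalities.
views = [rng.normal(size=(8,)), rng.normal(size=(12,)), rng.normal(size=(6,))]
weights = [rng.normal(size=(v.shape[0], 4)) for v in views]

H = encode_views(views, weights)   # (3, 4) view-specific representations
Z = self_attention(H)              # (3, 4) attention-fused representations
print(Z.shape, float(diversity_penalty(H)))
```

In a trained model, the diversity penalty would be added to the supervised loss so that the encoders are pushed toward complementary, rather than redundant, view representations.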
Pages: 3972 - 3978
Page count: 7
Related Papers
50 records in total
  • [1] Diversity-promoting multi-view graph learning for semi-supervised classification
    Zhan, Shanhua
    Sun, Weijun
    Du, Cuifeng
    Zhong, Weifang
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (10) : 2843 - 2857
  • [3] Multi-view self-attention networks
    Xu, Mingzhou
    Yang, Baosong
    Wong, Derek F.
    Chao, Lidia S.
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 241
  • [4] MULTI-VIEW SELF-ATTENTION BASED TRANSFORMER FOR SPEAKER RECOGNITION
    Wang, Rui
    Ao, Junyi
    Zhou, Long
    Liu, Shujie
    Wei, Zhihua
    Ko, Tom
    Li, Qing
    Zhang, Yu
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6732 - 6736
  • [5] Multi-view 3D Reconstruction with Self-attention
    Qian, Qiuting
    [J]. 2021 14TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER THEORY AND ENGINEERING (ICACTE 2021), 2021, : 20 - 26
  • [6] Multi-View Group Recommendation Integrating Self-Attention and Graph Convolution
    Wang, Yonggui
    Wang, Xinru
    [J]. COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (08) : 287 - 295
  • [7] Multi-view representation learning for multi-view action recognition
    Hao, Tong
    Wu, Dan
    Wang, Qian
    Sun, Jin-Sheng
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 48 : 453 - 460
  • [8] Incomplete multi-view clustering via self-attention networks and feature reconstruction
    Zhang, Yong
    Jiang, Li
    Liu, Da
    Liu, Wenzhe
    [J]. APPLIED INTELLIGENCE, 2024, 54 (04) : 2998 - 3016
  • [9] Multi-view self-attention for interpretable drug-target interaction prediction
    Agyemang, Brighter
    Wu, Wei-Ping
    Kpiebaareh, Michael Yelpengne
    Lei, Zhihua
    Nanor, Ebenezer
    Chen, Lei
    [J]. JOURNAL OF BIOMEDICAL INFORMATICS, 2020, 110