Multi-view Low-rank Preserving Embedding: A novel method for multi-view representation

Cited by: 9
Authors
Meng, Xiangzhu [1 ]
Feng, Lin [1 ]
Wang, Huibing [2 ]
Affiliations
[1] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[2] Dalian Maritime Univ, Informat Sci & Technol Coll, Dalian 116024, Peoples R China
Keywords
Multi-view learning; Low-rank preserving; Latent space; Iterative alternating strategy
DOI
10.1016/j.engappai.2020.104140
CLC number
TP [Automation technology, computer technology]
Subject classification number
0812
Abstract
In recent years, multi-view representation learning has attracted a surge of interest. When facing multiple views that are highly related but slightly different from each other, most existing multi-view methods may fail to fully exploit multi-view information. Additionally, pairwise correlations among multiple views often vary drastically, which makes multi-view representation challenging. Therefore, how to learn an appropriate representation from multi-view information remains an open and challenging problem. To address this issue, this paper proposes a novel multi-view learning method, named Multi-view Low-rank Preserving Embedding (MvLPE). It integrates all views into a common latent space, termed the centroid view, by minimizing the disagreement between the centroid view and each individual view, which encourages the views to learn from one another. Unlike existing methods that rely on explicit weight definitions, the proposed method automatically allocates a suitable weight to each view according to its contribution. Moreover, MvLPE preserves the low-rank reconstruction structure of each view while integrating all views into the centroid view. Since MvLPE has no closed-form solution, an effective algorithm based on an iterative alternating strategy is provided. Experiments on six benchmark datasets validate the effectiveness of the proposed method, which achieves superior performance over its counterparts.
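This record contains only the abstract, not the paper's exact objective function, so the following is a minimal Python sketch of the kind of iterative alternating scheme the abstract describes, under stated assumptions: each view v is summarized by a symmetric Laplacian-like matrix L_v built from its low-rank reconstruction coefficients (e.g., L_v = (I - Z_v)^T (I - Z_v)), the centroid embedding Y is updated by an eigendecomposition of the weighted combination, and the view weights are re-allocated by a standard gamma-power auto-weighting rule. The function name mvlpe_sketch and the parameter gamma are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mvlpe_sketch(laplacians, dim, n_iter=20, gamma=2.0):
    """Hypothetical sketch of an iterative alternating scheme in the
    spirit of MvLPE: alternate between a centroid embedding Y and
    per-view weights alpha.

    laplacians : list of (n, n) symmetric PSD matrices, one per view,
        assumed built from each view's low-rank reconstruction
        coefficients, e.g. L_v = (I - Z_v).T @ (I - Z_v).
    """
    n_views = len(laplacians)
    alpha = np.full(n_views, 1.0 / n_views)          # start with uniform weights
    for _ in range(n_iter):
        # Step 1: fix the weights, solve for Y.  The bottom-`dim`
        # eigenvectors of the weighted Laplacian minimize
        # tr(Y.T @ L @ Y) subject to Y.T @ Y = I.
        L = sum(a ** gamma * Lv for a, Lv in zip(alpha, laplacians))
        _, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
        Y = vecs[:, :dim]
        # Step 2: fix Y, re-allocate weights from each view's
        # disagreement d_v = tr(Y.T @ L_v @ Y); smaller disagreement
        # earns a larger weight (gamma-power auto-weighting, assumed).
        d = np.array([np.trace(Y.T @ Lv @ Y) for Lv in laplacians])
        w = (1.0 / np.maximum(d, 1e-12)) ** (1.0 / (gamma - 1.0))
        alpha = w / w.sum()
    return Y, alpha

if __name__ == "__main__":
    # Toy usage: two synthetic views of 50 samples.
    rng = np.random.default_rng(0)
    Ls = []
    for _ in range(2):
        M = np.eye(50) - 0.01 * rng.normal(size=(50, 50))
        Ls.append(M.T @ M)
    Y, alpha = mvlpe_sketch(Ls, dim=5)
```

With gamma > 1, a view that agrees better with the current centroid embedding (smaller tr(Y^T L_v Y)) receives a larger weight, while the power rule keeps the weights from collapsing onto a single view; this matches the abstract's claim that weights are allocated automatically according to each view's contribution.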
Pages: 11
Related papers
50 records in total
  • [1] Robust multi-view low-rank embedding clustering
    Dai, Jian
    Song, Hong
    Luo, Yunzhi
    Ren, Zhenwen
    Yang, Jian
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (10): 7877-7890
  • [2] Multi-view Locality Low-rank Embedding for Dimension Reduction
    Feng, Lin
    Meng, Xiangzhu
    Wang, Huibing
    [J]. KNOWLEDGE-BASED SYSTEMS, 2020, 191
  • [3] Spectral Embedding and Novel Low-rank Approximation Based Multi-view Clustering
    Liu, Xiaobo
    Long, Yin
    Nomikos, Yiannis
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022: 840-846
  • [4] Deep low-rank tensor embedding for multi-view subspace clustering
    Liu, Zhaohu
    Song, Peng
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 237
  • [5] Mixed structure low-rank representation for multi-view subspace clustering
    Wang, Shouhang
    Wang, Yong
    Lu, Guifu
    Le, Wenge
    [J]. APPLIED INTELLIGENCE, 2023, 53 (15): 18470-18487
  • [6] Weighted Low-Rank Tensor Representation for Multi-View Subspace Clustering
    Wang, Shuqin
    Chen, Yongyong
    Zheng, Fangying
    [J]. FRONTIERS IN PHYSICS, 2021, 8
  • [7] Adaptive Weighted Low-Rank Sparse Representation for Multi-View Clustering
    Khan, Mohammad Ahmar
    Khan, Ghufran Ahmad
    Khan, Jalaluddin
    Anwar, Taushif
    Ashraf, Zubair
    Atoum, Ibrahim A. A.
    Ahmad, Naved
    Shahid, Mohammad
    Ishrat, Mohammad
    Alghamdi, Abdulrahman Abdullah
    [J]. IEEE ACCESS, 2023, 11: 60681-60692
  • [8] Low-Rank and Sparse Tensor Representation for Multi-View Subspace Clustering
    Wang, Shuqin
    Chen, Yongyong
    Cen, Yigang
    Zhang, Linna
    Voronin, Viacheslav
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021: 1534-1538