Dual representation learning for one-step clustering of multi-view data

Cited: 0
Authors
Wei Zhang [1 ]
Zhaohong Deng [2 ]
Kup-Sze Choi [2 ]
Jun Wang [3 ]
Shitong Wang [4 ]
Affiliations
[1] Nantong University, School of Artificial Intelligence and Computer Science
[2] Jiangnan University, School of Artificial Intelligence and Computer Science
[3] Jiangsu Key Laboratory of Media Design and Software Technology, The Centre for Smart Health
[4] The Hong Kong Polytechnic University, School of Communication and Information Engineering
[5] Shanghai University
Keywords
Multi-view data; Dual representation learning; Consistent knowledge; Unique knowledge; One-step clustering;
DOI
10.1007/s10462-025-11183-0
Abstract
In real-world applications, multi-view data are widely available, and multi-view learning is an effective approach to mining such data. In recent years, multi-view clustering, an important branch of multi-view learning, has received increasing attention, yet designing an effective multi-view data mining method that is well suited to clustering remains a challenging task. To this end, this paper proposes a new one-step multi-view clustering method with dual representation learning. First, based on the fact that multi-view data contain both knowledge consistent across views and knowledge unique to each view, we propose a new dual representation learning method that improves matrix factorization to explore both kinds of knowledge and form common and specific representations. Then, we design a novel one-step multi-view clustering framework that unifies dual representation learning and multi-view clustering partitioning into a single process. In this way, a mutual self-taught mechanism develops within the framework, leading to more promising clustering performance. Finally, we also introduce maximum-entropy and orthogonality constraints to achieve optimal clustering results. Extensive experiments on seven real-world multi-view datasets demonstrate the effectiveness of the proposed method.
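To make the dual-representation idea concrete, the following is a minimal NumPy sketch of factorizing each view into a representation shared across views plus a view-specific one, via alternating least squares. This is an illustrative simplification, not the paper's algorithm: it omits the maximum-entropy and orthogonality constraints and the unified clustering step, and all function and variable names are assumptions for illustration.

```python
import numpy as np

def dual_repr_factorize(views, k_common=2, k_specific=1, n_iter=50, seed=0):
    """Factorize each view X_v (n x d_v) as Hc @ Wc[v] + Hs[v] @ Ws[v],
    where Hc (n x k_common) is shared across all views (consistent knowledge)
    and Hs[v] (n x k_specific) is specific to view v (unique knowledge)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    Hc = rng.standard_normal((n, k_common))
    Hs = [rng.standard_normal((n, k_specific)) for _ in views]
    Wc = [rng.standard_normal((k_common, X.shape[1])) for X in views]
    Ws = [rng.standard_normal((k_specific, X.shape[1])) for X in views]
    for _ in range(n_iter):
        # Update per-view loading matrices given the representations.
        for v, X in enumerate(views):
            H = np.hstack([Hc, Hs[v]])
            W, *_ = np.linalg.lstsq(H, X, rcond=None)
            Wc[v], Ws[v] = W[:k_common], W[k_common:]
        # Update the shared representation from all views' residuals stacked.
        R = np.hstack([X - Hs[v] @ Ws[v] for v, X in enumerate(views)])
        Wc_all = np.hstack(Wc)
        Hc = np.linalg.lstsq(Wc_all.T, R.T, rcond=None)[0].T
        # Update each view-specific representation from that view's residual.
        for v, X in enumerate(views):
            Hs[v] = np.linalg.lstsq(Ws[v].T, (X - Hc @ Wc[v]).T, rcond=None)[0].T
    return Hc, Hs, Wc, Ws
```

Since each block update is an exact least-squares minimizer, the reconstruction error is non-increasing over iterations; in the paper's full method, the shared representation `Hc` would additionally feed the clustering partition within the same optimization.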
Related papers
50 records in total
  • [41] Multi-view clustering via efficient representation learning with anchors
    Yu, Xiao
    Liu, Hui
    Zhang, Yan
    Sun, Shanbao
    Zhang, Caiming
    PATTERN RECOGNITION, 2023, 144
  • [42] Dynamic guided metric representation learning for multi-view clustering
    Zheng, Tingyi
    Zhang, Yilin
    Wang, Yuhang
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [43] SLRL: Structured Latent Representation Learning for Multi-view Clustering
    Xiong, Zhangci
    Cao, Meng
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031 : 551 - 566
  • [44] Spectral representation learning for one-step spectral rotation clustering
    Wen, Guoqiu
    Zhu, Yonghua
    Zheng, Wei
    NEUROCOMPUTING, 2020, 406 : 361 - 370
  • [45] Multi-View Robust Feature Learning for Data Clustering
    Zhao, Liang
    Zhao, Tianyang
    Sun, Tingting
    Liu, Zhuo
    Chen, Zhikui
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 1750 - 1754
  • [46] Smooth representation learning from multi-view data
    Huang, Shudong
    Liu, Yixi
    Cai, Hecheng
    Tan, Yuze
    Tang, Chenwei
    Lv, Jiancheng
    INFORMATION FUSION, 2023, 100
  • [47] One-step graph-based multi-view clustering via specific and unified nonnegative embeddings
    El Hajjar, Sally
    Abdallah, Fahed
    Omrani, Hichem
    Chaaban, Alain Khaled
    Arif, Muhammad
    Alturki, Ryan
    Alghamdi, Mohammed J.
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (12) : 5807 - 5822
  • [48] Dual Consensus Anchor Learning for Fast Multi-View Clustering
    Qin, Yalan
    Qin, Chuan
    Zhang, Xinpeng
    Feng, Guorui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 5298 - 5311
  • [49] Multi-View Representation Learning via Dual Optimal Transportation
    Li, Peng
    Gao, Jing
    Zhai, Bin
    Zhang, Jianing
    Chen, Zhikui
    IEEE ACCESS, 2021, 9 : 144976 - 144984
  • [50] Multi-view representation learning for multi-view action recognition
    Hao, Tong
    Wu, Dan
    Wang, Qian
    Sun, Jin-Sheng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 48 : 453 - 460