Multi-layer multi-level comprehensive learning for deep multi-view clustering

Cited by: 0
Authors:
[1] Chen, Zhe
[2] Wu, Xiao-Jun
[3] Xu, Tianyang
[4] Li, Hui
[5] Kittler, Josef
Keywords: Contrastive Learning
DOI: 10.1016/j.inffus.2024.102785
Abstract
Multi-view clustering has attracted widespread attention because of its capability to identify the common semantics shared by data captured from different views of objects or phenomena. This is a challenging problem, but with the emergence of deep auto-encoder networks, the performance of multi-view clustering methods has improved considerably. However, most existing methods use only the features output by the last encoder layer to carry out the clustering task. Such an approach neglects potentially useful information conveyed by the features of the earlier layers. To address this problem, we propose a novel multi-layer multi-level comprehensive learning framework for deep multi-view clustering (3MC). 3MC first performs contrastive learning across views on the deep features of each encoder layer separately, so as to achieve multi-view feature consistency. It then constructs layer-specific label MLPs to transform the features of each layer into high-level semantic labels. Finally, 3MC performs inter-layer contrastive learning on the high-level semantic labels in order to obtain multi-layer consistent clustering assignments. We demonstrate that the proposed comprehensive learning strategy, which proceeds from layer-specific inter-view feature comparison to inter-layer high-level label comparison, successfully extracts and exploits the underlying multi-view complementary information and achieves more accurate clustering. An extensive experimental comparison with the state-of-the-art methods demonstrates the effectiveness of the proposed framework. The code of this paper is available at https://github.com/chenzhe207/3MC.
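The pipeline described in the abstract lends itself to a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation (see their repository above for that): the layer widths, the cluster count, the use of InfoNCE as the contrastive loss, and the names ViewEncoder3MC, info_nce and loss_3mc are all assumptions, and the auto-encoder reconstruction terms are omitted. Step (1) contrasts same-layer features across views; step (2) contrasts the label MLPs' outputs across adjacent layers.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder3MC(nn.Module):
    """One view's encoder, exposing every intermediate layer, plus a
    layer-specific label MLP that maps each layer's features to soft
    cluster labels (layer widths here are illustrative)."""

    def __init__(self, in_dim, hidden_dims=(512, 256, 128), n_clusters=10):
        super().__init__()
        dims = (in_dim,) + tuple(hidden_dims)
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
             for i in range(len(hidden_dims))]
        )
        self.label_mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, n_clusters), nn.Softmax(dim=1))
             for d in hidden_dims]
        )

    def forward(self, x):
        feats, labels = [], []
        for layer, mlp in zip(self.layers, self.label_mlps):
            x = layer(x)
            feats.append(x)       # layer-wise deep features
            labels.append(mlp(x)) # layer-wise high-level semantic labels
        return feats, labels


def info_nce(a, b, temperature=0.5):
    """Symmetric InfoNCE loss between two aligned batches of embeddings."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def loss_3mc(models, views):
    """Hypothetical composite objective: (1) inter-view feature contrast
    at every encoder layer; (2) inter-layer contrast on high-level
    semantic labels. Auto-encoder reconstruction terms are omitted."""
    outs = [m(v) for m, v in zip(models, views)]  # per view: (feats, labels)
    n_layers = len(outs[0][0])
    loss = views[0].new_zeros(())
    # (1) layer-wise contrastive learning between every pair of views
    for l in range(n_layers):
        for i in range(len(outs)):
            for j in range(i + 1, len(outs)):
                loss = loss + info_nce(outs[i][0][l], outs[j][0][l])
    # (2) inter-layer label contrast: transpose so each cluster's
    # assignment vector over the batch acts as one contrastive sample
    for feats, labels in outs:
        for l in range(n_layers - 1):
            loss = loss + info_nce(labels[l].t(), labels[l + 1].t())
    return loss

A two-view toy call would be loss_3mc([ViewEncoder3MC(784), ViewEncoder3MC(256)], [torch.randn(32, 784), torch.randn(32, 256)]). The transpose trick in step (2) follows the label-level contrast common in contrastive clustering methods; the paper's exact loss formulation and layer pairing may differ.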
Related papers (50 in total):
  • [1] Reciprocal Multi-Layer Subspace Learning for Multi-View Clustering
    Li, Ruihuang
    Zhang, Changqing
    Fu, Huazhu
    Peng, Xi
    Zhou, Tianyi
    Hu, Qinghua
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8171 - 8179
  • [2] Multi-level Feature Learning for Contrastive Multi-view Clustering
    Xu, Jie
    Tang, Huayi
    Ren, Yazhou
    Peng, Liang
    Zhu, Xiaofeng
    He, Lifang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16030 - 16039
  • [3] Clean and robust multi-level subspace representations learning for deep multi-view subspace clustering
    Xu, Kaiqiang
    Tang, Kewei
    Su, Zhixun
    Tan, Hongchen
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 252
  • [4] Multi-view deep subspace clustering via level-by-level guided multi-level features learning
    Xu, Kaiqiang
    Tang, Kewei
    Su, Zhixun
    APPLIED INTELLIGENCE, 2024, 54 (21) : 11083 - 11102
  • [5] MCoCo: Multi-level Consistency Collaborative multi-view clustering
    Zhou, Yiyang
    Zheng, Qinghai
    Wang, Yifei
    Yan, Wenbiao
    Shi, Pengcheng
    Zhu, Jihua
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [6] Deep Incomplete Multi-view Clustering via Multi-level Imputation and Contrastive Alignment
    Wang, Ziyu
    Du, Yiming
    Wang, Yao
    Ning, Rui
    Li, Lusi
    NEURAL NETWORKS, 2025, 181
  • [7] Contrastive and adversarial regularized multi-level representation learning for incomplete multi-view clustering
    Wang, Haiyue
    Zhang, Wensheng
    Ma, Xiaoke
    NEURAL NETWORKS, 2024, 172
  • [9] Multi-layer manifold learning for deep non-negative matrix factorization-based multi-view clustering
    Luong, Khanh
    Nayak, Richi
    Balasubramaniam, Thirunavukarasu
    Bashar, Md Abul
    PATTERN RECOGNITION, 2022, 131
  • [10] By multi-layer to multi-level modeling
    Theisz, Zoltan
    Bacsi, Sandor
    Mezei, Gergely
    Somogyi, Ferenc A.
    Palatinszky, Daniel
    2019 ACM/IEEE 22ND INTERNATIONAL CONFERENCE ON MODEL DRIVEN ENGINEERING LANGUAGES AND SYSTEMS COMPANION (MODELS-C 2019), 2019, : 134 - 141