GenURL: A General Framework for Unsupervised Representation Learning

Times Cited: 0
Authors
Li, Siyuan [1 ]
Liu, Zicheng [1 ]
Zang, Zelin [2 ]
Wu, Di [2 ]
Chen, Zhiyuan [2 ]
Li, Stan Z. [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310000, Peoples R China
[2] Westlake Univ, Sch Engn, AI Div, Hangzhou 310030, Peoples R China
Keywords
Contrastive learning (CL); dimension reduction (DR); graph embedding (GE); knowledge distillation (KD); self-supervised learning; NONLINEAR DIMENSIONALITY REDUCTION;
DOI
10.1109/TNNLS.2023.3332087
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised representation learning (URL), which learns compact embeddings of high-dimensional data without supervision, has achieved remarkable progress recently. However, URL methods for different requirements are developed independently, which limits the generalization of the algorithms and becomes especially prohibitive as the number of tasks grows. For example, dimension reduction (DR) methods such as t-SNE and UMAP optimize pairwise data relationships to preserve the global geometric structure, while self-supervised learning methods such as SimCLR and BYOL focus on mining the local statistics of instances under specific augmentations. To address this dilemma, we summarize and propose a unified similarity-based URL framework, GenURL, which adapts smoothly to various URL tasks. In this article, we regard URL tasks as different implicit constraints on the data geometric structure that help to seek optimal low-dimensional representations; the problem then boils down to data structural modeling (DSM) and low-dimensional transformation (LDT). Specifically, DSM provides a structure-based submodule to describe the global structures, and LDT learns compact low-dimensional embeddings with given pretext tasks. Moreover, an objective function, the general Kullback-Leibler (GKL) divergence, is proposed to connect DSM and LDT naturally. Comprehensive experiments demonstrate that GenURL achieves consistent state-of-the-art performance in self-supervised visual learning, unsupervised knowledge distillation (KD), graph embeddings (GEs), and DR.
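The abstract frames URL as matching a similarity structure computed on the input data (DSM) with one computed on the learned embeddings (LDT), connected by a KL-style divergence. The following is a minimal sketch of that idea only, not the paper's implementation: it assumes Gaussian-kernel, row-normalized pairwise similarities (in the style of t-SNE) and uses a plain KL divergence with an illustrative `alpha` weight as a stand-in for the paper's GKL objective; all function names are hypothetical.

```python
import numpy as np

def pairwise_similarity(x, sigma=1.0):
    """Gaussian-kernel pairwise similarities, row-normalized so each
    row is a probability distribution over the other samples."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    sim = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(sim, 0.0)  # ignore self-similarity
    return sim / sim.sum(axis=1, keepdims=True)

def general_kl(p, q, alpha=1.0, eps=1e-12):
    """KL-style divergence between two similarity matrices.
    `alpha` is an illustrative weighting knob; with alpha=1 this is
    the ordinary row-wise KL divergence summed over rows."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * (np.log(p) - alpha * np.log(q))))

# Toy usage: compare the similarity structure of high-dimensional
# data (DSM side) against that of a low-dimensional embedding
# (LDT side); an optimizer would minimize this loss over z.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 5))   # "high-dimensional" data
z = rng.normal(size=(10, 2))   # candidate low-dimensional embedding
loss = general_kl(pairwise_similarity(x), pairwise_similarity(z))
```

In a real training loop the embedding `z` would be produced by a network and the divergence minimized by gradient descent, which is the part this sketch leaves out.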
Pages: 1 - 13
Page Count: 13
Related Papers
50 records total
  • [1] MoCoUTRL: a momentum contrastive framework for unsupervised text representation learning
    Zou, Ao
    Hao, Wenning
    Jin, Dawei
    Chen, Gang
    Sun, Feiyan
    CONNECTION SCIENCE, 2023, 35 (01)
  • [2] DRLnet: Deep Difference Representation Learning Network and An Unsupervised Optimization Framework
    Zhang, Puzhao
    Gong, Maoguo
    Zhang, Hui
    Liu, Jia
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3413 - 3419
  • [3] A General Representation Learning Framework with Generalization Performance Guarantees
    Cui, Junbiao
    Liang, Jianqing
    Yue, Qin
    Liang, Jiye
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [4] Federated unsupervised representation learning
    Zhang, Fengda
    Kuang, Kun
    Chen, Long
    You, Zhaoyang
    Shen, Tao
    Xiao, Jun
    Zhang, Yin
    Wu, Chao
    Wu, Fei
    Zhuang, Yueting
    Li, Xiaolin
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2023, 24 (08) : 1181 - 1193
  • [5] Continual Unsupervised Representation Learning
    Rao, Dushyant
    Visin, Francesco
    Rusu, Andrei A.
    Teh, Yee Whye
    Pascanu, Razvan
    Hadsell, Raia
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [6] A Shapelet-based Framework for Unsupervised Multivariate Time Series Representation Learning
    Liang, Zhiyu
    Zhang, Jianfeng
    Liang, Chen
    Wang, Hongzhi
    Liang, Zheng
    Pan, Lujia
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2023, 17 (03): : 386 - 399
  • [7] Usr-mtl: an unsupervised sentence representation learning framework with multi-task learning
    Xu, Wenshen
    Li, Shuangyin
    Lu, Yonghe
    APPLIED INTELLIGENCE, 2021, 51 (06) : 3506 - 3521
  • [9] An Unsupervised Deep Learning Framework via Integrated Optimization of Representation Learning and GMM-Based Modeling
    Wang, Jinghua
    Jiang, Jianmin
    COMPUTER VISION - ACCV 2018, PT I, 2019, 11361 : 249 - 265
  • [10] Masked Scene Contrast: A Scalable Framework for Unsupervised 3D Representation Learning
    Wu, Xiaoyang
    Wen, Xin
    Liu, Xihui
    Zhao, Hengshuang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9415 - 9424