GenURL: A General Framework for Unsupervised Representation Learning

Cited by: 0
Authors
Li, Siyuan [1 ]
Liu, Zicheng [1 ]
Zang, Zelin [2 ]
Wu, Di [2 ]
Chen, Zhiyuan [2 ]
Li, Stan Z. [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310000, Peoples R China
[2] Westlake Univ, Sch Engn, AI Div, Hangzhou 310030, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Uniform resource locators; Germanium; Data models; Manifolds; Data structures; Representation learning; Contrastive learning (CL); dimension reduction (DR); graph embedding (GE); knowledge distillation (KD); self-supervised learning; NONLINEAR DIMENSIONALITY REDUCTION;
DOI
10.1109/TNNLS.2023.3332087
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised representation learning (URL), which learns compact embeddings of high-dimensional data without supervision, has achieved remarkable progress recently. However, URL methods for different requirements have been developed independently, which limits the generalization of the algorithms and becomes especially prohibitive as the number of tasks grows. For example, dimension reduction (DR) methods such as t-SNE and UMAP optimize pairwise data relationships to preserve the global geometric structure, while self-supervised learning methods such as SimCLR and BYOL focus on mining the local statistics of instances under specific augmentations. To address this dilemma, we summarize and propose a unified similarity-based URL framework, GenURL, which adapts smoothly to various URL tasks. In this article, we regard URL tasks as different implicit constraints on the geometric structure of the data that help to seek optimal low-dimensional representations, so that the problem boils down to data structural modeling (DSM) and low-dimensional transformation (LDT). Specifically, DSM provides a structure-based submodule that describes the global structures, and LDT learns compact low-dimensional embeddings with given pretext tasks. Moreover, an objective function, the general Kullback-Leibler (GKL) divergence, is proposed to connect DSM and LDT naturally. Comprehensive experiments demonstrate that GenURL achieves consistent state-of-the-art performance in self-supervised visual learning, unsupervised knowledge distillation (KD), graph embeddings (GEs), and DR.
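To make the DSM-plus-LDT recipe described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the Gaussian and Student-t kernels, the plain KL term standing in for the paper's GKL divergence, and all function names and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the DSM -> LDT -> divergence pipeline; kernels,
# loss, and hyperparameters are assumptions, not the paper's exact method.
import torch
import torch.nn as nn


def _row_normalize(sim):
    # Zero out self-similarity and turn each row into a probability
    # distribution over the other samples.
    sim = sim * (1.0 - torch.eye(sim.size(0), device=sim.device))
    return sim / sim.sum(dim=1, keepdim=True).clamp_min(1e-12)


def dsm_similarity(x, sigma=1.0):
    # Data structural modeling (DSM) sketch: pairwise Gaussian similarities
    # computed once in the high-dimensional input space (assumed kernel).
    d2 = torch.cdist(x, x).pow(2)
    return _row_normalize(torch.exp(-d2 / (2.0 * sigma ** 2)))


def ldt_similarity(z):
    # Low-dimensional transformation (LDT) sketch: Student-t similarities on
    # the learned embeddings, as in t-SNE/UMAP-style objectives (assumed).
    d2 = torch.cdist(z, z).pow(2)
    return _row_normalize(1.0 / (1.0 + d2))


def kl_alignment_loss(p, q):
    # Plain KL(P || Q) between the two similarity distributions; the paper's
    # general KL (GKL) divergence generalizes this matching term.
    return (p * (p.clamp_min(1e-12).log() - q.clamp_min(1e-12).log())).sum()


# Toy run: embed 256 random 50-D points into 2-D with a small MLP encoder.
torch.manual_seed(0)
x = torch.randn(256, 50)
encoder = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

p = dsm_similarity(x)                      # fixed target structure from DSM
for step in range(200):
    q = ldt_similarity(encoder(x))         # current embedding structure (LDT)
    loss = kl_alignment_loss(p, q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under the abstract's framing, swapping the assumed input-space similarity (for example, with graph adjacency, teacher features, or augmentation-induced relations) is what would specialize such a template toward DR, graph embedding, knowledge distillation, or self-supervised visual learning.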
Pages: 286-298
Page count: 13