Self-Paced Unified Representation Learning for Hierarchical Multi-Label Classification

Cited by: 0
Authors
Yuan, Zixuan [1 ]
Liu, Hao [1 ]
Zhou, Haoyi [2 ]
Zhang, Denghui [3 ]
Zhang, Xiao [4 ]
Wang, Hao [5 ]
Xiong, Hui [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol Guangzhou, Hong Kong, Peoples R China
[2] Beihang Univ, Beijing, Peoples R China
[3] Stevens Inst Technol, Hoboken, NJ 07030 USA
[4] Shandong Univ, Jinan, Peoples R China
[5] Alibaba Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
ANNOTATION; ENSEMBLES;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Hierarchical Multi-Label Classification (HMLC) is a well-established problem that aims at assigning data instances to multiple classes organized in a hierarchical structure. Despite its importance, existing approaches often face two key limitations: (i) they employ dense networks that treat the class hierarchy solely as a hard criterion for maintaining taxonomic consistency among predicted classes, without leveraging the rich semantic relationships between instances and classes; (ii) they struggle to generalize in settings with deep class levels, since mini-batches uniformly sampled across levels ignore the varying complexity of the data, resulting in non-smooth model adaptation to sparse data. To mitigate these issues, we present a Self-Paced Unified Representation (SPUR) learning framework, which focuses on the interplay between instances and classes to flexibly organize the training process of HMLC algorithms. Our framework consists of two lightweight encoders designed to capture the semantics of input features and the topological information of the class hierarchy. These encoders generate unified embeddings of instances and the class hierarchy, which enable SPUR to exploit semantic dependencies between them and produce predictions consistent with taxonomic constraints. Furthermore, we introduce a dynamic hardness measurement strategy that considers both the class hierarchy and instance features to estimate the learning difficulty of each instance. This strategy incorporates the propagation loss obtained at each hierarchical level, allowing for a more comprehensive assessment of learning complexity. Extensive experiments on several empirical benchmarks demonstrate the effectiveness and efficiency of SPUR compared to state-of-the-art methods, especially in scenarios with missing features.
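The self-paced idea described in the abstract, estimating an instance's hardness from the losses accumulated across hierarchy levels and admitting easy instances first, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names `hardness` and `self_paced_mask`, the level-weighting scheme, and the threshold schedule are all hypothetical assumptions.

```python
import numpy as np

def hardness(level_losses, level_weights=None):
    """Estimate per-instance hardness as a weighted average of the
    propagation losses collected at each hierarchy level.
    level_losses: (n_instances, n_levels) array of per-level losses."""
    level_losses = np.asarray(level_losses, dtype=float)
    if level_weights is None:
        # Uniform weights; deeper (sparser) levels could be upweighted instead.
        level_weights = np.ones(level_losses.shape[1])
    level_weights = np.asarray(level_weights, dtype=float)
    return level_losses @ (level_weights / level_weights.sum())

def self_paced_mask(hard, age):
    """Classic self-paced selection: keep instances whose hardness falls
    below the current 'age' threshold; growing the threshold over epochs
    lets harder instances enter training gradually."""
    return hard <= age

# Toy example: 4 instances, 3 hierarchy levels of propagation loss.
losses = [[0.1, 0.2, 0.3],
          [0.9, 1.1, 1.5],
          [0.2, 0.1, 0.2],
          [0.5, 0.6, 0.9]]
h = hardness(losses)
mask = self_paced_mask(h, age=0.5)  # → [True, False, True, False]
```

In a training loop, only the masked instances would contribute to the gradient at the current epoch, and `age` would be increased on a schedule so the model adapts smoothly from easy to hard examples.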
Pages: 16623-16632
Page count: 10