Using Credal C4.5 for Calibrated Label Ranking in Multi-Label Classification

Cited by: 7
Authors
Moral-Garcia, Serafin [1 ]
Mantas, Carlos J. [1 ]
Castellano, Javier G. [1 ]
Abellan, Joaquin [1 ]
Affiliations
[1] Univ Granada, Dept Comp Sci & Artificial Intelligence, Granada, Spain
Keywords
Multi-Label Classification; Credal C4.5; Calibrated Label Ranking; C4.5; Label noise; Imprecise probabilities; IMPRECISE PROBABILITIES; DECISION TREES; CLASSIFIERS; PREDICTION; ENSEMBLES; DATASETS
DOI
10.1016/j.ijar.2022.05.005
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Multi-Label Classification (MLC) assumes that each instance belongs to a set of labels, unlike traditional classification, where each instance corresponds to a unique value of a class variable. Calibrated Label Ranking (CLR) is an MLC algorithm that determines a ranking of labels for a given instance by training a binary classifier for each pair of labels, thereby exploiting pairwise label correlations. Furthermore, CLR alleviates the class-imbalance problem that usually arises in MLC because, in this domain, very few instances often belong to a given label. Building the binary classifiers in CLR requires a standard classification algorithm, and the Decision Tree method C4.5 has been widely used for this purpose. In this research, we show that a recently proposed version of C4.5 based on imprecise probabilities, known as Credal C4.5, is more appropriate than C4.5 for handling the binary classification tasks in CLR. Experimental results reveal that Credal C4.5 outperforms C4.5 when both methods are used within CLR, and that the difference becomes more statistically significant as the label noise level increases.
Pages: 60-77
Number of pages: 18
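
For readers who want to experiment with the pairwise decomposition described in the abstract, the following is a minimal sketch of CLR in Python. It assumes a scikit-learn style interface and uses DecisionTreeClassifier as a stand-in base learner, since neither C4.5 nor Credal C4.5 ships with scikit-learn; the class name CLRSketch and its methods are illustrative, not the authors' implementation.

```python
# Minimal sketch of Calibrated Label Ranking (CLR), assuming a scikit-learn
# style API. DecisionTreeClassifier stands in for C4.5 / Credal C4.5, which
# are not available in scikit-learn. Class and method names are illustrative.
from itertools import combinations

import numpy as np
from sklearn.tree import DecisionTreeClassifier


class CLRSketch:
    def __init__(self, base_learner=DecisionTreeClassifier):
        self.base_learner = base_learner

    def fit(self, X, Y):
        """X: (n_samples, n_features); Y: (n_samples, n_labels) 0/1 matrix."""
        X, Y = np.asarray(X), np.asarray(Y)
        self.n_labels_ = Y.shape[1]
        self.pair_models_ = {}
        # One binary classifier per pair of labels, trained only on the
        # instances where exactly one of the two labels is relevant.
        for i, j in combinations(range(self.n_labels_), 2):
            mask = Y[:, i] != Y[:, j]
            if mask.any():
                self.pair_models_[(i, j)] = self.base_learner().fit(
                    X[mask], Y[mask, i])
        # Calibration: one classifier per label against the artificial
        # calibration label (equivalent to binary relevance).
        self.cal_models_ = [self.base_learner().fit(X, Y[:, k])
                            for k in range(self.n_labels_)]
        return self

    def predict(self, X):
        X = np.asarray(X)
        votes = np.zeros((X.shape[0], self.n_labels_))
        cal_votes = np.zeros(X.shape[0])
        # Each pairwise classifier votes for one label of its pair.
        for (i, j), clf in self.pair_models_.items():
            wins_i = clf.predict(X).astype(bool)
            votes[:, i] += wins_i
            votes[:, j] += ~wins_i
        # Each calibration classifier votes for its label or for the
        # calibration label; labels beating the calibration label are
        # predicted as relevant.
        for k, clf in enumerate(self.cal_models_):
            relevant = clf.predict(X).astype(bool)
            votes[:, k] += relevant
            cal_votes += ~relevant
        return (votes > cal_votes[:, None]).astype(int)
```

Swapping base_learner for an implementation of Credal C4.5 (or any other binary classifier) reproduces the experimental setting described above, up to the specifics of the authors' implementation.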