Multi-view multi-label active learning with conditional Bernoulli mixtures

Cited by: 3
Authors
Zhao, Jing [1 ]
Qiu, Zengyu [1 ]
Sun, Shiliang [1 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, 3663 North Zhongshan Rd, Shanghai 200062, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Active learning; Multi-label classification; Multi-view learning;
DOI
10.1007/s13042-021-01467-6
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-label classification is common in practical applications. Compared with multi-class classification, multi-label classification has a much larger label space, so annotating multi-label instances is typically more time-consuming. It is therefore important to develop active learning methods for multi-label classification problems. In addition, multi-view learning, which treats data from different views discriminatively and integrates information from all views effectively, has become increasingly popular; introducing multi-view methods into active learning can further improve its performance on multi-view data. In this paper, we propose multi-view active learning methods for multi-label classification. The proposed methods are built on the conditional Bernoulli mixture model, an effective model for multi-label classification. To construct the active selection criteria, we consider selecting instances that are both informative and representative. From the informativeness perspective, the least confidence and the entropy of the predictions are employed. From the representativeness perspective, clustering results on the unlabeled data are exploited. For multi-view active learning in particular, novel multi-view prediction methods are designed to make the final prediction, and view consistency is additionally incorporated into the selection criteria. Finally, we demonstrate the effectiveness of the proposed methods through experiments on real-world datasets.
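The selection criteria summarized in the abstract can be sketched concretely. In the conditional Bernoulli mixture formulation the methods build on, the label distribution is modeled as p(y|x) = Σ_k π_k(x) Π_l Bern(y_l | μ_kl(x)), so each unlabeled instance comes with per-label posterior probabilities. The Python sketch below is a hedged illustration under that assumption only; the function names, the k-means representativeness term, the uniform averaging of views, and the linear weighting are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of informativeness / representativeness / view-consistency
# selection for multi-view multi-label active learning. All names and weights
# here are assumptions for the sake of the example.
import numpy as np
from sklearn.cluster import KMeans


def entropy_score(probs):
    """Sum of per-label Bernoulli entropies; higher means more informative."""
    p = np.clip(probs, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum(axis=1)


def least_confidence_score(probs):
    """1 - probability of the most likely label vector under independent
    per-label Bernoulli outputs (a common least-confidence surrogate)."""
    conf = np.prod(np.maximum(probs, 1 - probs), axis=1)
    return 1.0 - conf


def representativeness_score(X_unlabeled, n_clusters=10):
    """Instances close to k-means cluster centres of the unlabeled pool are
    treated as more representative of that pool."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_unlabeled)
    dist = np.min(km.transform(X_unlabeled), axis=1)
    return 1.0 / (1.0 + dist)


def view_disagreement(probs_per_view):
    """Mean pairwise L1 disagreement between per-view label probabilities;
    larger disagreement indicates lower view consistency on an instance."""
    n_views = len(probs_per_view)
    d = np.zeros(probs_per_view[0].shape[0])
    n_pairs = 0
    for i in range(n_views):
        for j in range(i + 1, n_views):
            d += np.abs(probs_per_view[i] - probs_per_view[j]).mean(axis=1)
            n_pairs += 1
    return d / max(n_pairs, 1)


def select_batch(probs_per_view, X_unlabeled, batch_size=5, alpha=0.5, beta=0.5):
    """Rank unlabeled instances by a weighted combination of informativeness,
    representativeness, and view inconsistency, then return the top batch."""
    probs_avg = np.mean(probs_per_view, axis=0)   # placeholder late fusion of views
    info = entropy_score(probs_avg)               # or least_confidence_score(probs_avg)
    rep = representativeness_score(X_unlabeled)
    dis = view_disagreement(probs_per_view)
    score = info + alpha * rep + beta * dis
    return np.argsort(-score)[:batch_size]
```

Entropy and least confidence are interchangeable in this sketch, mirroring the two informativeness measures mentioned in the abstract; how the per-view predictions are actually fused and weighted is determined by the proposed multi-view prediction methods rather than the uniform average used here.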
Pages: 1589-1601
Number of pages: 13
Related papers
50 records in total
  • [11] Multi-View Multi-Label Learning With View-Label-Specific Features
    Huang, Jun
    Qu, Xiwen
    Li, Guorong
    Qin, Feng
    Zheng, Xiao
    Huang, Qingming
    [J]. IEEE ACCESS, 2019, 7 : 100979 - 100992
  • [12] Tensor based Multi-View Label Enhancement for Multi-Label Learning
    Zhang, Fangwen
    Jia, Xiuyi
    Li, Weiwei
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2369 - 2375
  • [13] Multi-view multi-label learning with view feature attention allocation
    Cheng, Yusheng
    Li, Qingyan
    Wang, Yibin
    Zheng, Weijie
    [J]. NEUROCOMPUTING, 2022, 501 : 857 - 874
  • [14] Multi-View Metric Learning for Multi-Label Image Classification
    Zhang, Mengying
    Li, Changsheng
    Wang, Xiangfeng
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2134 - 2138
  • [15] Multi-view Multi-label Learning with Shared Features Inconsistency
    Li, Qingyan
    Cheng, Yusheng
    [J]. NEURAL PROCESSING LETTERS, 2024, 56 (03)
  • [16] Multi-view Multi-label Learning with Incomplete Views and Labels
    Zhu, Changming
    Ma, Lin
    [J]. SN Computer Science, 2022, 3 (1)
  • [17] Multi-view multi-label learning with high-order label correlation
    Liu, Bo
    Li, Weibin
    Xiao, Yanshan
    Chen, Xiaodong
    Liu, Laiwang
    Liu, Changdong
    Wang, Kai
    Sun, Peng
    [J]. INFORMATION SCIENCES, 2023, 624 : 165 - 184
  • [18] Label driven latent subspace learning for multi-view multi-label classification
    Liu, Wei
    Yuan, Jiazheng
    Lyu, Gengyu
    Feng, Songhe
    [J]. APPLIED INTELLIGENCE, 2023, 53 (04) : 3850 - 3863
  • [20] Multi-View Multi-Label Learning with View-Specific Information Extraction
    Wu, Xuan
    Chen, Qing-Guo
    Hu, Yao
    Wang, Dengbao
    Chang, Xiaodong
    Wang, Xiaobo
    Zhang, Min-Ling
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3884 - 3890