DCPE Co-Training: Co-Training Based on Diversity of Class Probability Estimation

Cited by: 0
Authors:
Xu, Jin [1 ]
He, Haibo [2 ]
Man, Hong [1 ]
Affiliations:
[1] Stevens Inst Technol, Dept Elect & Comp Engn, Hoboken, NJ 07030 USA
[2] Univ Rhode Isl, Dept Elect Comp & Biomed Engn, Kingston, RI 02881 USA
Keywords:
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Co-training is a semi-supervised learning technique that uses two base learners to assign labels to unlabeled data. Standard co-training approaches augment the training set with the most confidently labeled unlabeled examples. In this paper, we investigate co-training approaches with a focus on the diversity issue and propose the diversity of class probability estimation (DCPE) co-training approach. The key idea of DCPE co-training is to use the DCPE between the two base learners to select which newly labeled examples are added to the training set. The results are compared with classic co-training, tri-training, and self-training methods. Our experimental study on UCI benchmark data sets shows that DCPE co-training is robust and efficient in classification.
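The selection step described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it assumes two scikit-learn-style base learners and interprets "diversity of class probability estimation" as the gap between the two learners' probability estimates for an agreed-upon label; the function name `dcpe_select` and the selection heuristic are assumptions for illustration only.

```python
# Hypothetical sketch of DCPE-style example selection (not the paper's code).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def dcpe_select(h1, h2, X_unlabeled, n_select=5):
    """Pick unlabeled points where both learners agree on the predicted
    label but their class-probability estimates differ the most."""
    p1 = h1.predict_proba(X_unlabeled)
    p2 = h2.predict_proba(X_unlabeled)
    c1, c2 = p1.argmax(axis=1), p2.argmax(axis=1)   # column indices
    y1, y2 = h1.classes_[c1], h2.classes_[c2]       # predicted labels
    agree = y1 == y2
    # "Diversity": absolute gap in the probability assigned to the agreed label.
    rows = np.arange(len(X_unlabeled))
    diversity = np.abs(p1[rows, c1] - p2[rows, c2])
    diversity[~agree] = -1.0                        # never pick disagreements
    idx = np.argsort(diversity)[::-1][:n_select]    # highest diversity first
    return idx, y1[idx]

# Toy demonstration on synthetic 2-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_lab, y_lab, X_unl = X[:20], y[:20], X[20:]

h1 = KNeighborsClassifier(n_neighbors=3).fit(X_lab, y_lab)
h2 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_lab, y_lab)
idx, pseudo = dcpe_select(h1, h2, X_unl)
```

In a full co-training loop, the selected examples (`X_unl[idx]` with pseudo-labels `pseudo`) would be added to each learner's training set and the learners retrained; the actual DCPE criterion and stopping rules are specified in the paper.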
Pages: 7