A boosting framework for positive-unlabeled learning

Cited: 0
Authors
Zhao, Yawen [1 ]
Zhang, Mingzhe [1 ]
Zhang, Chenhao [1 ]
Chen, Weitong [2 ]
Ye, Nan [1 ]
Xu, Miao [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
[2] Univ Adelaide, Adelaide, SA, Australia
Funding
Australian Research Council;
Keywords
Boosting; Weakly supervised learning; PU learning; Ensemble;
DOI
10.1007/s11222-024-10529-y
Chinese Library Classification
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Positive-unlabeled (PU) learning deals with binary classification problems where only positive and unlabeled data are available. In this paper, we introduce a novel boosting framework, Adaptive PU (AdaPU), for learning from PU data. AdaPU builds an ensemble of weak classifiers using weak learners tailored to PU data. We propose two main approaches for learning the weak classifiers: a direct loss minimization approach that learns weak classifiers to greedily minimize PU-data-based estimates of the exponential loss, specifically, the unbiased PU estimate and the non-negative PU estimate; and a constrained loss minimization approach that learns weak classifiers to greedily minimize the unbiased PU estimate of the exponential loss, subject to regularization constraints. The direct loss minimization approach, while natural and simple, often yields weak learners prone to overfitting or leads to computationally expensive algorithms. On the other hand, the constrained loss minimization approach can effectively alleviate overfitting and allow the design of efficient weak learners. In particular, we propose a tailored weak learner for the simple class of decision stumps, or one-level decision trees, which interestingly demonstrates strong performance in comparison to various other weak classifiers. Furthermore, we provide several theoretical results on the performance of AdaPU. We performed extensive experiments to evaluate the variants of AdaPU and various baseline algorithms. Our results demonstrate the effectiveness of the constrained loss minimization approach.
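The two PU-data-based estimates of the exponential loss named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are mine, `pi_p` denotes the positive class prior (assumed known, as is standard in PU learning), and the estimators follow the usual unbiased PU risk decomposition with the non-negative variant clamping the negative-class term at zero.

```python
import numpy as np

def exp_loss(scores, y):
    """Exponential loss exp(-y * g(x)) for label y in {+1, -1}."""
    return np.exp(-y * scores)

def unbiased_pu_exp_loss(scores_p, scores_u, pi_p):
    """Unbiased PU estimate of the exponential risk:
    pi_p * E_p[l(g,+1)] + (E_u[l(g,-1)] - pi_p * E_p[l(g,-1)]).
    Can go negative on finite samples, which encourages overfitting.
    """
    pos_term = pi_p * exp_loss(scores_p, +1).mean()
    neg_term = exp_loss(scores_u, -1).mean() - pi_p * exp_loss(scores_p, -1).mean()
    return pos_term + neg_term

def nn_pu_exp_loss(scores_p, scores_u, pi_p):
    """Non-negative PU estimate: clamp the negative-class term at zero,
    since the true expected negative-class risk cannot be negative."""
    pos_term = pi_p * exp_loss(scores_p, +1).mean()
    neg_term = exp_loss(scores_u, -1).mean() - pi_p * exp_loss(scores_p, -1).mean()
    return pos_term + max(neg_term, 0.0)
```

When an ensemble fits the positive examples well (large positive scores on `scores_p`), the unbiased estimate can become negative while the non-negative estimate stays bounded below by the positive-class term; this is the overfitting issue the clamped estimator mitigates.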
Pages: 22
Related papers
50 records
  • [21] Recovering True Classifier Performance in Positive-Unlabeled Learning
    Jain, Shantanu
    White, Martha
    Radivojac, Predrag
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2066 - 2072
  • [22] Spotting Fake Reviews using Positive-Unlabeled Learning
    Li, Huayi
    Liu, Bing
    Mukherjee, Arjun
    Shao, Jidong
    COMPUTACION Y SISTEMAS, 2014, 18 (03): : 467 - 475
  • [23] Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment
    Cao, Yue
    Wan, Zhaolin
    Ren, Dongwei
    Yan, Zifei
    Zuo, Wangmeng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 5841 - 5851
  • [24] Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning
    Niu, Gang
    du Plessis, Marthinus C.
    Sakai, Tomoya
    Ma, Yao
    Sugiyama, Masashi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [25] Investigating Active Positive-Unlabeled Learning with Deep Networks
    Han, Kun
    Chen, Weitong
    Xu, Miao
    AI 2021: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13151 : 607 - 618
  • [26] Bootstrap Latent Prototypes for graph positive-unlabeled learning
    Liang, Chunquan
    Tian, Yi
    Zhao, Dongmin
    Li, Mei
    Pan, Shirui
    Zhang, Hongming
    Wei, Jicheng
    INFORMATION FUSION, 2024, 112
  • [27] Positive-Unlabeled Compression on the Cloud
    Xu, Yixing
    Wang, Yunhe
    Chen, Hanting
    Han, Kai
    Xu, Chunjing
    Tao, Dacheng
    Xu, Chang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [28] Positive-Unlabeled Domain Adaptation
    Sonntag, Jonas
    Behrens, Gunnar
    Schmidt-Thieme, Lars
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022, : 66 - 75
  • [29] GradPU: Positive-Unlabeled Learning via Gradient Penalty and Positive Upweighting
    Dai, Songmin
    Li, Xiaoqiang
    Zhou, Yue
    Ye, Xichen
    Liu, Tong
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7296 - +
  • [30] An ensemble learning framework for potential miRNA-disease association prediction with positive-unlabeled data
    Wu, Yao
    Zhu, Donghua
    Wang, Xuefeng
    Zhang, Shuo
    COMPUTATIONAL BIOLOGY AND CHEMISTRY, 2021, 95