A reinforcement learning approach for single redundant view co-training text classification

Times Cited: 2
Authors
Paiva, Bruno B. M. [1 ]
Nascimento, Erickson R. [1 ]
Goncalves, Marcos Andre [1 ]
Belem, Fabiano [1 ]
Affiliations
[1] Univ Fed Minas Gerais, Dept Comp Sci, Rua Reitor Pires Albuquerque, BR-31270901 Belo Horizonte, MG, Brazil
Keywords
Semi-supervised learning; Reinforcement learning; Meta learning;
DOI
10.1016/j.ins.2022.09.065
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
We tackle the problem of learning classification models with very small amounts of labeled data (e.g., less than 10% of the dataset) by introducing a novel Single View Co-Training strategy supported by Reinforcement Learning (CoRL). CoRL is a novel semi-supervised learning framework that can be used with a single view (representation). Unlike traditional co-training, which requires at least two sufficient and independent data views (e.g., modes), our solution is applicable to any kind of data. Our approach exploits a reinforcement learning (RL) paradigm as a strategy to relax the view independence assumption, using a stronger iterative agent that builds more precise combined decision class boundaries. Our experimental evaluation with four popular textual benchmarks demonstrates that CoRL can produce better classifiers than confidence-based co-training methods, while achieving high effectiveness in comparison with the state of the art in semi-supervised learning. In our experiments, CoRL reduced the labeling effort by more than 80% with no losses in classification effectiveness, outperforming state-of-the-art baselines, including methods based on neural networks, with gains of up to 96% over some of the best competitors. © 2022 Elsevier Inc. All rights reserved.
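The abstract describes CoRL only at a high level. As a rough illustration of the general idea behind single redundant view co-training, the sketch below trains two different learners over the same feature view and iteratively promotes pseudo-labeled examples into the labeled set. The agreement-plus-confidence selection rule is a simplified stand-in for the reinforcement learning agent described in the paper, and all names, thresholds, and data in the code are illustrative assumptions, not the authors' implementation.

# Minimal sketch of single-view co-training with an agent-style selection step.
# Assumptions: synthetic data stands in for a vectorized text collection, and a
# simple agreement/confidence rule replaces CoRL's reinforcement learning agent.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Single view: one feature matrix shared by both learners.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           n_classes=2, random_state=0)
labeled = rng.choice(len(X), size=100, replace=False)        # ~5% labeled
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

L_X, L_y = X[labeled], y[labeled]
U_X = X[unlabeled]

# Two different learners over the SAME (redundant) view replace the two
# independent views required by classical co-training.
clf_a, clf_b = LogisticRegression(max_iter=1000), GaussianNB()

for it in range(5):                                           # co-training rounds
    clf_a.fit(L_X, L_y)
    clf_b.fit(L_X, L_y)
    if len(U_X) == 0:
        break
    proba_a = clf_a.predict_proba(U_X)
    proba_b = clf_b.predict_proba(U_X)

    # Agent-style selection: in CoRL an RL agent decides which pseudo-labels
    # to trust; here a joint-confidence + agreement heuristic stands in.
    pred_a, pred_b = proba_a.argmax(1), proba_b.argmax(1)
    score = proba_a.max(1) * proba_b.max(1)
    candidates = np.where((pred_a == pred_b) & (score > 0.9))[0]
    picked = candidates[np.argsort(-score[candidates])][:50]  # top 50 per round
    if len(picked) == 0:
        break

    # Move the selected pseudo-labeled examples into the labeled set.
    L_X = np.vstack([L_X, U_X[picked]])
    L_y = np.concatenate([L_y, pred_a[picked]])
    U_X = np.delete(U_X, picked, axis=0)

print("final labeled-set size:", len(L_y))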
Pages: 24-38
Number of Pages: 15