A reinforcement learning approach for single redundant view co-training text classification

Cited by: 2
Authors
Paiva, Bruno B. M. [1 ]
Nascimento, Erickson R. [1 ]
Goncalves, Marcos Andre [1 ]
Belem, Fabiano [1 ]
Affiliations
[1] Univ Fed Minas Gerais, Dept Comp Sci, Rua Reitor Pires Albuquerque, BR-31270901 Belo Horizonte, MG, Brazil
Keywords
Semi-supervised learning; Reinforcement learning; Meta learning
DOI
10.1016/j.ins.2022.09.065
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
We tackle the problem of learning classification models with very small amounts of labeled data (e.g., less than 10% of the dataset) by introducing a novel Single View Co-Training strategy supported by Reinforcement Learning (CoRL). CoRL is a novel semi-supervised learning framework that can be used with a single view (representation). Unlike traditional co-training, which requires at least two sufficient and independent data views (e.g., modes), our solution is applicable to any kind of data. Our approach exploits a reinforcement learning (RL) paradigm as a strategy to relax the view independence assumption, using a stronger iterative agent that builds more precise combined decision class boundaries. Our experimental evaluation with four popular textual benchmarks demonstrates that CoRL can produce better classifiers than confidence-based co-training methods, while producing high effectiveness in comparison with the state-of-the-art in semi-supervised learning. In our experiments, CoRL reduced the labeling effort by more than 80% with no losses in classification effectiveness, outperforming state-of-the-art baselines, including methods based on neural networks, with gains of up to 96% against some of the best competitors. © 2022 Elsevier Inc. All rights reserved.
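The abstract contrasts CoRL with confidence-based co-training, where two learners trained on different views iteratively pseudo-label high-confidence unlabeled examples for each other. The RL agent that CoRL adds is not reproduced here; the sketch below only illustrates the baseline family on a single representation, using one hypothetical feature per learner as a pseudo-view split (names such as `AxisLearner` and `co_train` are illustrative, not from the paper):

```python
class AxisLearner:
    """Nearest-centroid classifier restricted to one feature (a pseudo-view)."""

    def __init__(self, axis):
        self.axis = axis
        self.cents = {}  # class label -> centroid of that feature

    def fit(self, labeled):
        """labeled: list of (point, label) pairs; points are tuples of floats."""
        by_cls = {}
        for p, y in labeled:
            by_cls.setdefault(y, []).append(p[self.axis])
        self.cents = {y: sum(v) / len(v) for y, v in by_cls.items()}

    def predict_margin(self, p):
        """Return (predicted label, confidence margin = 2nd-best minus best distance)."""
        dists = sorted((abs(p[self.axis] - c), y) for y, c in self.cents.items())
        return dists[0][1], dists[1][0] - dists[0][0]


def co_train(labeled, unlabeled, rounds=5, thresh=2.0):
    """Confidence-based co-training on two pseudo-views of a single representation.

    Each learner pseudo-labels unlabeled points it is confident about
    (margin >= thresh) and adds them to the OTHER learner's training set.
    Returns a combined predictor that trusts the more confident learner.
    """
    views = [AxisLearner(0), AxisLearner(1)]
    sets = [list(labeled), list(labeled)]
    pool = list(unlabeled)
    for _ in range(rounds):
        for i, learner in enumerate(views):
            learner.fit(sets[i])
            keep = []
            for p in pool:
                y, margin = learner.predict_margin(p)
                if margin >= thresh:
                    sets[1 - i].append((p, y))  # teach the other view
                else:
                    keep.append(p)  # still too uncertain; retry next round
            pool = keep
    for i, learner in enumerate(views):
        learner.fit(sets[i])

    def predict(p):
        # Combine the views by taking the prediction with the larger margin.
        return max((v.predict_margin(p) for v in views), key=lambda t: t[1])[0]

    return predict


if __name__ == "__main__":
    # Two well-separated toy clusters; only one labeled seed per class.
    labeled = [((0.0, 0.0), 0), ((10.0, 10.0), 1)]
    unlabeled = [(1.0, 0.5), (0.5, 1.2), (9.5, 10.2), (10.4, 9.1)]
    predict = co_train(labeled, unlabeled)
    print(predict((0.8, 0.3)), predict((9.7, 9.9)))
```

CoRL replaces the fixed margin threshold above with an RL agent that learns which pseudo-labels to trust, which is what relaxes the view-independence assumption.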
Pages: 24-38 (15 pages)
Related papers
50 records total (showing 31-40)
  • [31] Leveraging Text Classification by Co-training with Bidirectional Language Models - A Novel Hybrid Approach and Its Application for a German Bank
    Graef, Roland
    INNOVATION THROUGH INFORMATION SYSTEMS, VOL II: A COLLECTION OF LATEST RESEARCH ON TECHNOLOGY ISSUES, 2021, 47 : 216 - 231
  • [32] Combining Transfer Learning and Co-training for Student Classification in an Academic Credit System
    Nguyen Duy Hoang
    Vo Thi Ngoc Chau
    Nguyen Hua Phung
    2016 IEEE RIVF INTERNATIONAL CONFERENCE ON COMPUTING & COMMUNICATION TECHNOLOGIES, RESEARCH, INNOVATION, AND VISION FOR THE FUTURE (RIVF), 2016, : 55 - 60
  • [33] Co-training for Demographic Classification Using Deep Learning from Label Proportions
    Ardehaly, Ehsan Mohammady
    Culotta, Aron
    2017 17TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW 2017), 2017, : 1017 - 1024
  • [34] Question classification based on co-training style semi-supervised learning
    Yu, Zhengtao
    Su, Lei
    Li, Lina
    Zhao, Quan
    Mao, Cunli
    Guo, Jianyi
    PATTERN RECOGNITION LETTERS, 2010, 31 (13) : 1975 - 1980
  • [35] Auto-encoder Based Co-training Multi-view Representation Learning
    Lu, Run-kun
    Liu, Jian-wei
    Wang, Yuan-fang
    Xie, Hao-jie
    Zuo, Xin
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2019, PT III, 2019, 11441 : 119 - 130
  • [36] Multi-view Co-training for microRNA Prediction
    Hassani, Mohsen Sheikh
    Green, James R.
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [38] Random Relevant and Non-redundant Feature Subspaces for Co-training
    Yaslan, Yusuf
    Cataltepe, Zehra
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING, PROCEEDINGS, 2009, 5788 : 679 - 686
  • [39] CoRec: A Co-Training Approach for Recommender Systems
    da Costa, Arthur F.
    Manzato, Marcelo G.
    Campello, Ricardo J. G. B.
    33RD ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2018, : 696 - 703
  • [40] Boosting Supervised Learning Performance with Co-training
    Du, Xinnan
    Zhang, William
    Alvarez, Jose M.
    2021 32ND IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2021, : 540 - 545