Exploring Co-Training Strategies for Opinion Detection

Cited by: 15
Author
Yu, Ning [1]
Affiliation
[1] Univ Kentucky, Sch Lib & Informat Sci, Lexington, KY 40506 USA
Keywords
text mining; machine learning; automatic classification; SEMANTIC ORIENTATION; CLASSIFICATION;
DOI
10.1002/asi.23111
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
For the last decade or so, sentiment analysis, which aims to automatically identify opinions, polarities, or emotions from user-generated content (e.g., blogs, tweets), has attracted interest from both academic and industrial communities. Most sentiment analysis strategies fall into two categories: lexicon-based and corpus-based approaches. While the latter often requires sentiment-labeled data to build a machine learning model, both approaches need sentiment-labeled data for evaluation. Unfortunately, most data domains lack sufficient quantities of labeled data, especially at the subdocument level. Semisupervised learning (SSL), a machine learning technique that requires only a few labeled examples and can automatically label unlabeled data, is a promising strategy for dealing with insufficient labeled data. Although previous studies have reported promising results from applying various SSL algorithms to sentiment analysis, co-training, one such SSL algorithm, has attracted little attention for sentiment analysis, largely because of its restrictive assumptions. This study therefore revisits co-training in depth and discusses several co-training strategies for sentiment analysis that follow a looser assumption. Results suggest that co-training can be more effective than other SSL methods currently adopted for sentiment analysis.
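Note: this record does not reproduce the paper's specific co-training configurations, so the sketch below is only a minimal, generic illustration of co-training for sentiment classification. Every choice in it is an assumption made for illustration rather than the author's setup: the two "views" are simulated by splitting a single bag-of-words vocabulary in half, the base learners are Naive Bayes classifiers from scikit-learn, and the function names (co_train, predict), iteration count, and number of pseudo-labels per round are arbitrary.

# Minimal, illustrative co-training sketch for sentiment classification.
# Two artificial "views" are built by splitting one vocabulary in half; each
# round, the two view classifiers nominate their most confident unlabeled
# examples, which are pseudo-labeled and moved into the shared labeled pool.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def co_train(texts_labeled, y_labeled, texts_unlabeled, iterations=5, per_class=4):
    vec = CountVectorizer()
    vec.fit(list(texts_labeled) + list(texts_unlabeled))
    n_features = len(vec.vocabulary_)
    view_a = slice(0, n_features // 2)            # artificial view 1
    view_b = slice(n_features // 2, n_features)   # artificial view 2

    XL = vec.transform(texts_labeled).toarray()
    XU = vec.transform(texts_unlabeled).toarray()
    yL = np.asarray(y_labeled)

    for _ in range(iterations):
        if XU.shape[0] == 0:
            break
        clf_a = MultinomialNB().fit(XL[:, view_a], yL)
        clf_b = MultinomialNB().fit(XL[:, view_b], yL)

        # Each view classifier nominates its most confident unlabeled examples.
        chosen = set()
        for clf, view in ((clf_a, view_a), (clf_b, view_b)):
            proba = clf.predict_proba(XU[:, view])
            for c in range(proba.shape[1]):
                top = np.argsort(proba[:, c])[::-1][:per_class]
                chosen.update(int(i) for i in top)
        chosen = sorted(chosen)

        # Assign pseudo-labels by averaging the two views' probabilities
        # (a simplification of classic co-training, where each view labels
        # examples for the other).
        avg = (clf_a.predict_proba(XU[chosen][:, view_a]) +
               clf_b.predict_proba(XU[chosen][:, view_b])) / 2.0
        pseudo = clf_a.classes_[avg.argmax(axis=1)]

        # Grow the labeled pool and shrink the unlabeled pool.
        XL = np.vstack([XL, XU[chosen]])
        yL = np.concatenate([yL, pseudo])
        XU = np.delete(XU, chosen, axis=0)

    # Final classifiers trained on the enlarged labeled set.
    clf_a = MultinomialNB().fit(XL[:, view_a], yL)
    clf_b = MultinomialNB().fit(XL[:, view_b], yL)
    return vec, (clf_a, view_a), (clf_b, view_b)

def predict(vec, model_a, model_b, texts):
    # Classify new texts by averaging the two views' predicted probabilities.
    clf_a, view_a = model_a
    clf_b, view_b = model_b
    X = vec.transform(texts).toarray()
    avg = (clf_a.predict_proba(X[:, view_a]) +
           clf_b.predict_proba(X[:, view_b])) / 2.0
    return clf_a.classes_[avg.argmax(axis=1)]

In use, co_train would be given a handful of sentiment-labeled texts and a larger unlabeled pool, and new documents would then be classified with predict; the paper's actual strategies differ in how the views are constructed and how the original co-training assumptions are relaxed.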
Pages: 2098 - 2110
Page count: 13
Related papers
(50 in total)
  • [1] Standard Co-training in Multiword Expression Detection
    Metin, Senem Kumova
    [J]. INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2017, 2017, 10688 : 178 - 188
  • [2] Integrating co-training and recognition for text detection
    Wu, W
    Chen, DT
    Yang, J
    [J]. 2005 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), VOLS 1 AND 2, 2005, : 1167 - 1170
  • [3] Co-Training for On-Board Deep Object Detection
    Villalonga, Gabriel
    Pena, Antonio M. Lopez
[J]. IEEE ACCESS, 2020, 8 : 194441 - 194456
  • [4] DCPE Co-Training: Co-Training Based on Diversity of Class Probability Estimation
    Xu, Jin
    He, Haibo
    Man, Hong
[J]. 2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010
  • [5] Traffic Sign Detection Based on Co-training Method
    Fang Shengchao
    Xin Le
    Chen Yangzhou
    [J]. 2014 33RD CHINESE CONTROL CONFERENCE (CCC), 2014, : 4893 - 4898
  • [6] Bayesian Co-Training
    Yu, Shipeng
    Krishnapuram, Balaji
    Rosales, Romer
    Rao, R. Bharat
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 2649 - 2680
  • [7] ROBUST CO-TRAINING
    Sun, Shiliang
    Jin, Feng
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2011, 25 (07) : 1113 - 1126
  • [8] Active Speaker Detection with Audio-Visual Co-training
    Chakravarty, Punarjay
    Zegers, Jeroen
    Tuytelaars, Tinne
    Van Hamme, Hugo
    [J]. ICMI'16: PROCEEDINGS OF THE 18TH ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2016, : 312 - 316
  • [9] Co-training for Policy Learning
    Song, Jialin
    Lanka, Ravi
    Yue, Yisong
    Ono, Masahiro
    [J]. 35TH UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE (UAI 2019), 2020, 115 : 1191 - 1201
  • [10] Shadow Detection Based on Adaboost Classifiers in a Co-training Framework
    Zhao, Jie
    Kong, Suhong
    Men, Guozun
    [J]. 2011 CHINESE CONTROL AND DECISION CONFERENCE, VOLS 1-6, 2011, : 1672 - 1676