On semi-supervised multiple representation behavior learning

Cited: 2
Authors
Lu, Ruqian [1 ,2 ]
Hou, Shengluan [2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Acad Math & Syst Sci, Key Lab MADIS, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semi-supervised learning; Semi-supervised multiple representation behavior learning; Semi-supervised grammar learning
DOI
10.1016/j.jocs.2020.101111
CLC Classification
TP39 [Computer applications];
Subject Classification
081203; 0835;
Abstract
Since Shahshahani and Landgrebe published their seminal paper (Shahshahani and Landgrebe, 1994) [1], research on semi-supervised learning (SSL) has developed rapidly and has become one of the mainstream directions of machine learning (ML) research. However, there are still areas and problems where the capability of SSL remains seriously limited. First, by our observation, almost all SSL research addresses classification, regression, or clustering tasks; more difficult tasks such as planning, construction, summarization, and argumentation are rarely studied with SSL methods. Second, most SSL research uses only simple labels (e.g., a string, an identifier, or a numerical value) to mark text data. Such simple labels are ill-suited to characterizing data that carries delicate information, which may be why current SSL techniques are inappropriate for complex tasks. Third, having entered the age of big data and big knowledge, SSL, like the other branches of ML, now faces the challenge of learning big knowledge from big data. The shortcomings of traditional SSL mentioned above become even more serious, and new SSL technology is needed. In this paper, we propose and discuss a novel SSL paradigm, semi-supervised multiple representation behavior learning (SSMRBL), aimed at meeting the challenges stated above. SSMRBL extends current SSL techniques to support learning complex tasks such as planning, construction, summarization, and argumentation. To meet this challenge, SSMRBL introduces compound structured labels, such as trees, graphs, and lattices, to represent complicated information about the objects and tasks to be learned. Thus, to label an unlabeled datum is to construct a compound structured label for it. As a consequence, SSMRBL needs multiple representations: one for the compound structured labels, one for the target model (the unification of all local models, i.e., labels), one for the process (behavior) of label construction, and one for efficient computation during learning. This paper also introduces a typical instance of SSMRBL, semi-supervised grammar learning (SSGL), which learns a grammar from a set of natural-language texts and then applies this grammar to parse new texts and summarize their content. We also provide experimental results based on a variety of algorithms to show the reasonableness of our ideas. (c) 2020 Published by Elsevier B.V.
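The abstract's core idea, that pseudo-labeling an unlabeled datum means constructing a compound structured label such as a parse tree, can be illustrated with a minimal, hypothetical sketch of one SSGL-style self-training round. The toy grammar, tree encoding (nested tuples), and function names below are our own illustration under simplifying assumptions (binary and lexical rules only), not the paper's actual implementation:

```python
from collections import defaultdict
import math

def collect_rules(tree, counts):
    """Collect production counts from a compound (tree-structured) label."""
    head, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        counts[(head, children[0])] += 1                     # lexical rule A -> w
    else:
        counts[(head, tuple(c[0] for c in children))] += 1   # binary rule A -> B C
        for c in children:
            collect_rules(c, counts)

def estimate(trees):
    """Maximum-likelihood rule probabilities from labeled parse trees."""
    counts = defaultdict(int)
    for t in trees:
        collect_rules(t, counts)
    totals = defaultdict(int)
    for (lhs, _), n in counts.items():
        totals[lhs] += n
    return {(lhs, rhs): n / totals[lhs] for (lhs, rhs), n in counts.items()}

def best_parse(words, probs, start="S"):
    """CKY search for the most probable parse: the compound pseudo-label."""
    n = len(words)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                            # fill lexical cells
        for (lhs, rhs), p in probs.items():
            if rhs == w:
                chart[i][i + 1][lhs] = (math.log(p), (lhs, w))
    for span in range(2, n + 1):                             # combine sub-spans
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for (lhs, rhs), p in probs.items():
                    if isinstance(rhs, tuple) and len(rhs) == 2:
                        b, c = rhs
                        if b in chart[i][j] and c in chart[j][k]:
                            lp = math.log(p) + chart[i][j][b][0] + chart[j][k][c][0]
                            if lhs not in chart[i][k] or lp > chart[i][k][lhs][0]:
                                chart[i][k][lhs] = (
                                    lp, (lhs, chart[i][j][b][1], chart[j][k][c][1]))
    entry = chart[0][n].get(start)
    return entry[1] if entry else None

# One self-training round: labeled trees -> grammar -> pseudo-label a new sentence.
labeled = [
    ("S", ("NP", "she"), ("VP", ("V", "eats"), ("NP", "fish"))),
    ("S", ("NP", "he"), ("VP", ("V", "reads"), ("NP", "books"))),
]
grammar = estimate(labeled)
pseudo_label = best_parse(["he", "eats", "books"], grammar)
if pseudo_label is not None:     # a parse was found: add it back and re-estimate
    labeled.append(pseudo_label)
    grammar = estimate(labeled)
```

Note how the "label" constructed for the unlabeled sentence is itself a tree, so the labeling step is a construction process (a behavior), not the assignment of a simple class identifier, which is exactly the distinction the abstract draws.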
Pages: 17
Related Papers
50 records
  • [1] Semi-Supervised Crowd Counting via Multiple Representation Learning
    Wei, Xing
    Qiu, Yunfeng
    Ma, Zhiheng
    Hong, Xiaopeng
    Gong, Yihong
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 5220 - 5230
  • [2] Learning Semi-Supervised Representation Towards a Unified Optimization Framework for Semi-Supervised Learning
    Li, Chun-Guang
    Lin, Zhouchen
    Zhang, Honggang
    Guo, Jun
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 2767 - 2775
  • [3] Semi-supervised learning by sparse representation
    Yan, Shuicheng
    Wang, Huan
    [J]. Society for Industrial and Applied Mathematics - 9th SIAM International Conference on Data Mining 2009, Proceedings in Applied Mathematics, 2009, 2 : 788 - 797
  • [4] Semi-Supervised Learning via Regularized Boosting Working on Multiple Semi-Supervised Assumptions
    Chen, Ke
    Wang, Shihai
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2011, 33 (01) : 129 - 143
  • [5] Semi-supervised metric learning via topology preserving multiple semi-supervised assumptions
    Wang, Qianying
    Yuen, Pong C.
    Feng, Guocan
    [J]. PATTERN RECOGNITION, 2013, 46 (09) : 2576 - 2587
  • [6] FairSwiRL: fair semi-supervised classification with representation learning
    Yang, Shuyi
    Cerrato, Mattia
    Ienco, Dino
    Pensa, Ruggero G.
    Esposito, Roberto
    [J]. Machine Learning, 2023, 112 : 3051 - 3076
  • [7] SEMI-SUPERVISED METRIC LEARNING VIA TOPOLOGY REPRESENTATION
    Wang, Q. Y.
    Yuen, P. C.
    Feng, G. C.
    [J]. 2012 PROCEEDINGS OF THE 20TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2012, : 639 - 643
  • [8] Improved Semi-Supervised Learning with Multiple Graphs
    Viswanathan, Krishnamurthy
    Sachdeva, Sushant
    Tomkins, Andrew
    Ravi, Sujith
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [9] HOLISTIC SEMI-SUPERVISED APPROACHES FOR EEG REPRESENTATION LEARNING
    Zhang, Guangyi
    Etemad, Ali
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 1241 - 1245
  • [10] Semi-Supervised Visual Representation Learning for Fashion Compatibility
    Revanur, Ambareesh
    Kumar, Vijay
    Sharma, Deepthi
    [J]. 15TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS 2021), 2021, : 463 - 472