Since Shahshahani and Landgrebe published their seminal paper (Shahshahani and Landgrebe, 1994) [1], research on semi-supervised learning (SSL) has developed rapidly and has become one of the mainstream topics of machine learning (ML) research. However, there are still areas and problems where the capability of SSL remains seriously limited. First, by our observation, almost all SSL research addresses classification, regression, or clustering tasks; more difficult tasks such as planning, construction, summarization, and argumentation are rarely studied with SSL methods. Second, most SSL research uses only simple labels (e.g., a string, an identifier, or a numerical value) to annotate data. Such simple labels cannot characterize data carrying delicate information, which may be why current SSL techniques are inadequate for complex tasks. Third, in the age of big data and big knowledge, SSL, like the other branches of ML, faces the challenge of learning big knowledge from big data; the shortcomings of traditional SSL mentioned above become even more serious, and new SSL techniques are needed. In this paper, we propose and discuss a novel paradigm of SSL, semi-supervised multiple representation behavior learning (SSMRBL), which aims to meet the challenges stated above. SSMRBL extends current SSL techniques to support complex learning tasks such as planning, construction, summarization, and argumentation. To this end, SSMRBL introduces compound structured labels such as trees, graphs, and lattices to represent the complicated information of the objects and tasks to be learned. Labeling an unlabeled datum thus means constructing a compound structured label for it. As a consequence, SSMRBL needs multiple representations.
There may be one representation for compound structured labels, one for the target model (the unification of all local models, i.e., labels), one for the process (behavior) of label construction, and one for efficient computation during learning. This paper also introduces a typical instance of SSMRBL: semi-supervised grammar learning (SSGL), which learns a grammar from a set of natural-language texts and then applies this grammar to parse new texts and summarize their content. We also provide experimental results based on a variety of algorithms to show the reasonableness of our ideas. (c) 2020 Published by Elsevier B.V.
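The abstract does not fix a concrete data structure for compound structured labels. As a minimal illustrative sketch (all names here are hypothetical, not from the paper), a tree-structured label in the SSGL setting could be represented as follows, where labeling a sentence means building its parse tree rather than attaching a single class identifier:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical node type for a tree-shaped compound structured label.
@dataclass
class TreeLabel:
    symbol: str                      # grammar symbol ("S", "NP", ...) or a word
    children: List["TreeLabel"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return not self.children

    def leaves(self) -> List[str]:
        """Return the leaf sequence, i.e., the labeled sentence itself."""
        if self.is_leaf():
            return [self.symbol]
        return [w for c in self.children for w in c.leaves()]

# Labeling the unlabeled sentence "birds fly" constructs a whole tree:
label = TreeLabel("S", [
    TreeLabel("NP", [TreeLabel("birds")]),
    TreeLabel("VP", [TreeLabel("fly")]),
])

print(label.leaves())  # -> ['birds', 'fly']
```

A simple label (a single string) carries one bit of category information, whereas a tree like this records both the grammar symbols and the derivation structure, which is the kind of "delicate information" the paradigm is meant to capture.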