Multi-task Learning Based Online Dialogic Instruction Detection with Pre-trained Language Models

Cited by: 3
Authors
Hao, Yang [1 ]
Li, Hang [1 ]
Ding, Wenbiao [1 ]
Wu, Zhongqin [1 ]
Tang, Jiliang [2 ]
Luckin, Rose [3 ]
Liu, Zitao [1 ]
Affiliations
[1] TAL Educ Grp, Beijing, Peoples R China
[2] Michigan State Univ, Data Sci & Engn Lab, E Lansing, MI 48824 USA
[3] UCL Knowledge Lab, London, England
Funding
National Key Research and Development Program of China;
关键词
Dialogic instruction; Multi-task learning; Pre-trained language model; Hard example mining;
DOI
10.1007/978-3-030-78270-2_33
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we study computational approaches to detecting online dialogic instructions, which are widely used to help students understand learning materials and build effective study habits. This task is challenging due to the widely varying quality and pedagogical styles of dialogic instructions. To address these challenges, we utilize pre-trained language models and propose a multi-task paradigm that enhances the ability to distinguish instances of different classes by enlarging the margin between categories via a contrastive loss. Furthermore, we design a strategy to fully exploit misclassified examples during the training stage. Extensive experiments on a real-world online educational data set demonstrate that our approach achieves superior performance compared to representative baselines. To encourage reproducible results, we make our implementation available online at https://github.com/AIED2021/multitask-dialogic-instruction.
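The abstract's two key ingredients — a contrastive term that enlarges the margin between categories, and mining of misclassified (hard) examples during training — could be sketched roughly as below. This is a toy NumPy illustration, not the paper's implementation: the function names, the pairwise contrastive formulation, and the use of a plain hinge margin are assumptions made here for clarity.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class (the base
    # classification objective in a multi-task setup).
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def contrastive_loss(embeddings, labels, margin=1.0):
    # Pull same-class pairs together; push different-class pairs
    # at least `margin` apart, enlarging the inter-class margin.
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                total += d ** 2
            else:
                total += max(0.0, margin - d) ** 2
            pairs += 1
    return total / pairs

def mine_hard_examples(probs, labels):
    # Indices of misclassified examples, to be re-emphasised
    # (e.g. re-sampled or up-weighted) in later training steps.
    preds = probs.argmax(axis=1)
    return np.where(preds != labels)[0]
```

In the paper's setting, the embeddings would come from a pre-trained language model encoding each dialogic instruction, the two losses would be combined as multi-task objectives, and the mined hard examples would be fed back into subsequent training batches.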
Pages: 183-189
Page count: 7
Related Papers
50 records in total
  • [1] Multi-task Learning based Pre-trained Language Model for Code Completion
    Liu, Fang
    Li, Ge
    Zhao, Yunfei
    Jin, Zhi
    [J]. 2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), 2020, : 473 - 485
  • [2] Multi-task Active Learning for Pre-trained Transformer-based Models
    Rotman, Guy
    Reichart, Roi
    [J]. TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 1209 - 1228
  • [3] Enhancing Pre-trained Language Representation for Multi-Task Learning of Scientific Summarization
    Jia, Ruipeng
    Cao, Yannan
    Fang, Fang
    Li, Jinpeng
    Liu, Yanbing
    Yin, Pengfei
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [4] Drug knowledge discovery via multi-task learning and pre-trained models
    Li, Dongfang
    Xiong, Ying
    Hu, Baotian
    Tang, Buzhou
    Peng, Weihua
    Chen, Qingcai
    [J]. BMC MEDICAL INFORMATICS AND DECISION MAKING, 2021, 21 (SUPPL 9)
  • [5] PTMB: An online satellite task scheduling framework based on pre-trained Markov decision process for multi-task scenario
    Li, Guohao
    Li, Xuefei
    Li, Jing
    Chen, Jia
    Shen, Xin
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 284
  • [6] MTLink: Adaptive multi-task learning based pre-trained language model for traceability link recovery between issues and commits
    Deng, Yang
    Wang, Bangchao
    Zhu, Qiang
    Liu, Junping
    Kuang, Jiewen
    Li, Xingfu
    [J]. JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2024, 36 (02)
  • [7] MCM: A Multi-task Pre-trained Customer Model for Personalization
    Luo, Rui
    Wang, Tianxin
    Deng, Jingyuan
    Wan, Peng
    [J]. PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 637 - 639
  • [8] Understanding Online Attitudes with Pre-Trained Language Models
    Power, William
    Obradovic, Zoran
    [J]. PROCEEDINGS OF THE 2023 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING, ASONAM 2023, 2023, : 745 - 752
  • [9] MtArtGPT: A Multi-task Art Generation System with Pre-Trained Transformer
    Jin, C.
    Zhu, R.
    Zhu, Z.
    Yang, L.
    Yang, M.
    Luo, J.
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 1 - 1