DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder

Cited by: 8
Authors
Zhang, Zhenyu [1 ]
Guo, Tao [2 ]
Chen, Meng [3 ]
Affiliations
[1] JD AI, Chengdu, Peoples R China
[2] Xiaoduo AI, Chengdu, Peoples R China
[3] JD AI, Beijing, Peoples R China
Keywords
Dialogue Pre-training Model; Dialogue Representation; Intent Recognition; Emotion Recognition; Named Entity Recognition
DOI
10.1145/3459637.3482085
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the rapid development of artificial intelligence, conversational bots have become prevalent on mainstream e-commerce platforms, where they provide convenient and timely customer service. To satisfy users, a conversational bot needs to understand the user's intention, detect the user's emotion, and extract the key entities from conversational utterances. Understanding dialogues, however, is regarded as a very challenging task: unlike common language understanding, utterances in a dialogue come alternately from different roles and are usually organized in hierarchical structures. To facilitate dialogue understanding, this paper proposes a novel contextual dialogue encoder (i.e., DialogueBERT) based on the popular pre-trained language model BERT. Five self-supervised pre-training tasks are devised to learn the particularities of dialogue utterances, and four input embeddings, namely turn, role, token, and position embeddings, are integrated to capture the relationships between utterances. DialogueBERT was pre-trained on 70 million real-world dialogues and then fine-tuned on three downstream dialogue understanding tasks. Experimental results show that DialogueBERT achieves 88.63% accuracy for intent recognition, 94.25% accuracy for emotion recognition, and a 97.04% F1 score for named entity recognition, outperforming several strong baselines by a large margin.
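As a concrete illustration of the input scheme described in the abstract, below is a minimal PyTorch sketch of how token, position, role, and turn embeddings could be summed BERT-style before entering the encoder. All class and parameter names, dimension sizes, and the user/bot role convention are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DialogueInputEmbedding(nn.Module):
    """Hypothetical four-way input embedding: token + position + role + turn,
    summed BERT-style. All sizes are illustrative, not the paper's values."""

    def __init__(self, vocab_size=30522, hidden=768, max_pos=512,
                 num_roles=2, max_turns=32):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)
        self.position = nn.Embedding(max_pos, hidden)
        self.role = nn.Embedding(num_roles, hidden)   # assumed: 0 = user, 1 = bot
        self.turn = nn.Embedding(max_turns, hidden)   # index of the dialogue turn a token belongs to
        self.norm = nn.LayerNorm(hidden)
        self.dropout = nn.Dropout(0.1)

    def forward(self, token_ids, role_ids, turn_ids):
        # All inputs: LongTensor of shape (batch, seq_len)
        seq_len = token_ids.size(1)
        pos_ids = torch.arange(seq_len, device=token_ids.device).unsqueeze(0)
        x = (self.token(token_ids) + self.position(pos_ids)
             + self.role(role_ids) + self.turn(turn_ids))
        return self.dropout(self.norm(x))

# Usage sketch: a two-turn dialogue flattened into one token sequence.
emb = DialogueInputEmbedding()
tokens = torch.randint(0, 30522, (1, 16))
roles = torch.tensor([[0] * 8 + [1] * 8])   # first 8 tokens from the user, rest from the bot
turns = torch.tensor([[0] * 8 + [1] * 8])   # turn 0, then turn 1
out = emb(tokens, roles, turns)             # -> (1, 16, 768)
```

In the full model, the summed embeddings would presumably feed a standard BERT encoder, with the five self-supervised objectives applied on top during pre-training.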
Pages: 3647-3651
Number of pages: 5
Related Papers (50 in total)
  • [31] Self-supervised graph neural network with pre-training generative learning for recommendation systems. Min, Xin; Li, Wei; Yang, Jinzhao; Xie, Weidong; Zhao, Dazhe. Scientific Reports, 2022, 12(1).
  • [33] LPCL: Localized prominence contrastive learning for self-supervised dense visual pre-training. Chen, Zihan; Zhu, Hongyuan; Cheng, Hao; Mi, Siya; Zhang, Yu; Geng, Xin. Pattern Recognition, 2023, 135.
  • [34] Self-Supervised Dialogue Learning. Wu, Jiawei; Wang, Xin; Wang, William Yang. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 3857-3867.
  • [35] Masked self-supervised pre-training model for EEG-based emotion recognition. Hu, Xinrong; Chen, Yu; Yan, Jinlin; Wu, Yuan; Ding, Lei; Xu, Jin; Cheng, Jun. Computational Intelligence, 2024, 40(3).
  • [36] Complementary Mask Self-Supervised Pre-training Based on Teacher-Student Network. Ye, Shaoxiong; Huang, Jing; Zhu, Lifu. 2023 3rd Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS), 2023: 199-206.
  • [37] Token Boosting for Robust Self-Supervised Visual Transformer Pre-training. Li, Tianjiao; Foo, Lin Geng; Hu, Ping; Shang, Xindi; Rahmani, Hossein; Yuan, Zehuan; Liu, Jun. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 24027-24038.
  • [38] SslTransT: Self-supervised pre-training visual object tracking with Transformers. Cai, Yannan; Tan, Ke; Wei, Zhenzhong. Optics Communications, 2024, 557.
  • [39] Guided Contrastive Self-Supervised Pre-training for Automatic Speech Recognition. Khare, Aparna; Wu, Minhua; Bhati, Saurabhchand; Droppo, Jasha; Maas, Roland. 2022 IEEE Spoken Language Technology Workshop (SLT), 2022: 174-181.
  • [40] Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training. Islam, Tanvir; Washington, Peter. Applied Sciences (Basel), 2023, 13(21).