DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder

Cited by: 8
Authors
Zhang, Zhenyu [1 ]
Guo, Tao [2 ]
Chen, Meng [3 ]
Affiliations
[1] JD AI, Chengdu, Peoples R China
[2] Xiaoduo AI, Chengdu, Peoples R China
[3] JD AI, Beijing, Peoples R China
Keywords
Dialogue Pre-training Model; Dialogue Representation; Intent Recognition; Emotion Recognition; Named Entity Recognition;
DOI
10.1145/3459637.3482085
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
With the rapid development of artificial intelligence, conversational bots have become prevalent on mainstream e-commerce platforms, where they provide convenient and timely customer service. To satisfy users, a conversational bot needs to understand the user's intention, detect the user's emotion, and extract key entities from conversational utterances. However, understanding dialogues is regarded as a very challenging task: unlike common language understanding, utterances in dialogues alternate between different roles and are usually organized in hierarchical structures. To facilitate the understanding of dialogues, in this paper we propose a novel contextual dialogue encoder (i.e., DialogueBERT) based on the popular pre-trained language model BERT. Five self-supervised pre-training tasks are devised to learn the particularities of dialogue utterances. Four different input embeddings are integrated to capture the relationships between utterances: turn embedding, role embedding, token embedding, and position embedding. DialogueBERT was pre-trained on 70 million dialogues from a real-world scenario and then fine-tuned on three different downstream dialogue understanding tasks. Experimental results show that DialogueBERT achieves 88.63% accuracy for intent recognition, 94.25% accuracy for emotion recognition, and a 97.04% F1 score for named entity recognition, outperforming several strong baselines by a large margin.
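The abstract does not state how the four input embeddings are combined; the standard BERT-style approach is to sum them and apply layer normalization. Below is a minimal PyTorch sketch of such an input layer, assuming summed embeddings; all sizes (vocabulary, hidden dimension, number of roles, maximum turns) are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class DialogueInputEmbedding(nn.Module):
    """Combines token, position, role, and turn embeddings, BERT-style.

    All dimensions below are illustrative assumptions, not values
    reported for DialogueBERT.
    """
    def __init__(self, vocab_size=21128, hidden=768, max_pos=512,
                 num_roles=2, max_turns=32):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)
        self.position = nn.Embedding(max_pos, hidden)
        self.role = nn.Embedding(num_roles, hidden)   # e.g., 0 = user, 1 = agent
        self.turn = nn.Embedding(max_turns, hidden)   # utterance index in the dialogue
        self.norm = nn.LayerNorm(hidden)
        self.dropout = nn.Dropout(0.1)

    def forward(self, token_ids, role_ids, turn_ids):
        # token_ids, role_ids, turn_ids: (batch, seq_len)
        seq_len = token_ids.size(1)
        pos_ids = torch.arange(seq_len, device=token_ids.device).unsqueeze(0)
        x = (self.token(token_ids) + self.position(pos_ids)
             + self.role(role_ids) + self.turn(turn_ids))
        return self.dropout(self.norm(x))  # (batch, seq_len, hidden)
```

In use, every token carries the role id of its speaker and the turn index of the utterance it belongs to, so the encoder can distinguish who said what and when across the hierarchical dialogue structure.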
Pages: 3647-3651
Page count: 5