Dialog Act Segmentation and Classification in Vietnamese

Cited by: 0
Authors
Luong, Tho Chi [1 ]
Tran, Oanh Thi [2 ]
Affiliations
[1] FPT Univ, FPT Technol Res Inst, Hanoi, Vietnam
[2] Vietnam Natl Univ, Int Sch, Hanoi, Vietnam
Source
INTELLIGENT COMPUTING, VOL 2, 2022, Volume 507
Keywords
Dialog segmentation; Dialog act; Deep learning; Vietnamese retail domain; RECOGNITION;
DOI
10.1007/978-3-031-10464-0_40
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Natural Language Understanding (NLU) is a critical component in building a conversational system. So far, most systems have processed user inputs at the utterance level and assumed a single dialog act (DA) per utterance. In fact, one utterance might contain more than one DA, each denoted by a different contiguous text span inside it (a.k.a. a functional segment). As a step towards achieving natural and flexible interaction between humans and machines, especially in low-resource languages, this paper presents work on dialog segmentation (DS) and DA classification in Vietnamese. We first introduce the corpus and then systematically investigate different pipeline and joint learning approaches to tackle the two tasks. Experimental results show that the joint learning approach is superior in boosting the performance of both tasks; it outperforms the conventional pipeline approach, which treats the two tasks separately. Moreover, to further enhance the final performance, this paper proposes a technique to enrich the models with useful DA knowledge. Compared to the standard models that do not use DA knowledge, we achieve considerably better results for both tasks. Specifically, we achieved an F1 score of 86% in segmenting dialogues and an F1-micro score of 74.75% in classifying DAs. This provides a strong foundation for future research in this field.
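Note: the record above does not include code; the following is a minimal illustrative sketch (Python) of one common way to cast joint dialog segmentation and DA classification as token-level sequence labeling with combined B-/I- plus dialog-act labels, so a single tagger predicts segment boundaries and DA types together. It is not the authors' implementation; the utterance, segment spans, and DA names (Ask-Availability, Ask-Price) are invented for illustration and are not taken from the paper's corpus.

# A sketch of the joint label scheme: each token gets a label such as
# "B-Ask-Price" or "I-Ask-Price"; decoding the label sequence recovers both
# the functional segments (for segmentation F1) and their dialog acts
# (for DA classification F1-micro).
from typing import List, Tuple

def encode_joint_labels(tokens: List[str],
                        segments: List[Tuple[int, int, str]]) -> List[str]:
    """Turn (start, end, dialog_act) segments into per-token B-/I- labels."""
    labels = ["O"] * len(tokens)
    for start, end, act in segments:          # end index is exclusive
        labels[start] = f"B-{act}"
        for i in range(start + 1, end):
            labels[i] = f"I-{act}"
    return labels

def decode_joint_labels(labels: List[str]) -> List[Tuple[int, int, str]]:
    """Recover (start, end, dialog_act) segments from the joint label sequence."""
    segments, start, act = [], None, None
    for i, label in enumerate(labels):
        if label.startswith("B-"):
            if start is not None:             # close the previous segment
                segments.append((start, i, act))
            start, act = i, label[2:]
        elif label.startswith("I-") and act == label[2:]:
            continue                          # still inside the current segment
        else:                                 # "O" or inconsistent tag ends the segment
            if start is not None:
                segments.append((start, i, act))
            start, act = None, None
    if start is not None:
        segments.append((start, len(labels), act))
    return segments

if __name__ == "__main__":
    # One utterance containing two functional segments (hypothetical retail example).
    tokens = "shop còn áo này không và giá bao nhiêu".split()
    segments = [(0, 5, "Ask-Availability"), (6, 9, "Ask-Price")]
    labels = encode_joint_labels(tokens, segments)
    print(list(zip(tokens, labels)))
    assert decode_joint_labels(labels) == segments

Under this formulation, any sequence tagger (for example, a neural model over the token sequence) can be trained on the combined labels, and both evaluation scores reported in the abstract can be computed from the decoded segments.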
Pages: 594-604
Number of pages: 11
Related Papers
(50 records in total)
  • [31] Multi-Task Network for Joint Dialog Act Recognition and Sentiment Classification
    Lin, Honghui
    Liu, Jianhua
    Zheng, Zhixiong
    Hu, Renyuan
    Luo, Yixuan
    Computer Engineering and Applications, 2024, 59 (03) : 104 - 111
  • [32] VALUE - A DIALOG IN ONE ACT
    LAIBMAN, D
    SCIENCE & SOCIETY, 1985, 48 (04) : 449 - 465
  • [33] Speech Act Classification in Vietnamese Utterance and Its Application in Smart Mobile Voice Interaction
    Thi-Lan Ngo
    Quang-Vu Duong
    Son-Bao Pham
    Xuan-Hieu Phan
    PROCEEDINGS OF THE SEVENTH SYMPOSIUM ON INFORMATION AND COMMUNICATION TECHNOLOGY (SOICT 2016), 2016, : 396 - 402
  • [34] On Speaker-Specific Prosodic Models for Automatic Dialog Act Segmentation of Multi-Party Meetings
    Kolar, Jachym
    Shriberg, Elizabeth
    Liu, Yang
    INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, VOLS 1-5, 2006, : 2014 - 2017
  • [35] CONTEXT-AWARE NEURAL-BASED DIALOG ACT CLASSIFICATION ON AUTOMATICALLY GENERATED TRANSCRIPTIONS
    Ortega, Daniel
    Li, Chia-Yu
    Vallejo, Gisela
    Denisov, Pavel
    Ngoc Thang Vu
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 7265 - 7269
  • [36] Using SVM and Error-correcting Codes for Multiclass Dialog Act Classification in Meeting Corpus
    Liu, Yang
    INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, VOLS 1-5, 2006, : 1938 - 1941
  • [37] Model adaptation for dialog act tagging
    Tur, Gokhan
    Guz, Umit
    Hakkani-Tuer, Dilek
    2006 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, 2006, : 94 - +
  • [38] A novel ensemble model with two-stage learning for joint dialog act recognition and sentiment classification
    Xu, Yujun
    Yao, Enguang
    Liu, Chaoyue
    Liu, Qidong
    Xu, Mingliang
    PATTERN RECOGNITION LETTERS, 2023, 165 : 77 - 83
  • [39] Response Timing Estimation for Spoken Dialog System using Dialog Act Estimation
    Sakuma, Jin
    Fujie, Shinya
    Kobayashi, Tetsunori
    INTERSPEECH 2022, 2022, : 4486 - 4490
  • [40] A Hybrid Approach to Vietnamese Word Segmentation
    Tuan-Phong Nguyen
    Anh-Cuong Le
    2016 IEEE RIVF INTERNATIONAL CONFERENCE ON COMPUTING & COMMUNICATION TECHNOLOGIES, RESEARCH, INNOVATION, AND VISION FOR THE FUTURE (RIVF), 2016, : 114 - 119