Heuristic personality recognition based on fusing multiple conversations and utterance-level affection

Cited: 0
Authors
He, Haijun [1 ]
Li, Bobo [1 ]
Xiong, Yiyun [1 ]
Zheng, Li [1 ]
He, Kang [1 ]
Li, Fei [1 ]
Ji, Donghong [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Personality recognition; Conversation analysis; Natural language processing;
DOI
10.1016/j.ipm.2024.103931
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Personality Recognition in Conversations (PRC) is a task of significant interest and practical value. Existing studies on the PRC task use conversational context inadequately and neglect affective information. Since the way these studies process information does not closely reflect the concept of personality, we propose the SAH-GCN model for the PRC task. The model first processes the original conversation input to extract the central speaker feature. Using Contrastive Learning, it continuously adjusts the embedding of each utterance by incorporating affective information, which helps it cope with semantic similarity among utterances. The model then employs Graph Convolutional Networks to simulate the conversation dynamics, ensuring comprehensive interaction between the central speaker feature and other relevant features. Finally, it heuristically fuses the central speaker features from multiple conversations involving the same speaker into one comprehensive feature for personality recognition. We conduct experiments on the recently released CPED dataset, a personality dataset that includes affection labels and conversation details. Our results show that SAH-GCN achieves superior accuracy (+1.88%) compared to prior works on the PRC task. Further analysis verifies the efficacy of our scheme of fusing multiple conversations and incorporating affective information for personality recognition.
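The two core steps the abstract describes, graph-convolutional propagation over a conversation graph and heuristic fusion of central-speaker features across conversations, can be illustrated with a minimal NumPy sketch. This is not the authors' SAH-GCN implementation; the function names, the fully connected toy adjacency matrix, and the use of simple averaging as the fusion heuristic are all assumptions made for illustration.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, row-normalize the
    adjacency matrix, then propagate utterance features (ReLU output)."""
    a = adj + np.eye(adj.shape[0])                      # self-loops
    d_inv = np.diag(1.0 / a.sum(axis=1))                # degree normalization
    return np.maximum(d_inv @ a @ feats @ weight, 0.0)  # ReLU activation

def fuse_conversations(speaker_feats):
    """Fuse central-speaker features from multiple conversations into one
    comprehensive vector (here: a simple average, a hypothetical heuristic)."""
    return np.mean(np.stack(speaker_feats), axis=0)

# Toy example: two conversations, 3 utterances each, 4-dim embeddings.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))                 # shared GCN weight matrix
per_conv = []
for _ in range(2):
    adj = np.array([[0, 1, 1],              # fully connected utterance graph
                    [1, 0, 1],
                    [1, 1, 0]], dtype=float)
    utt = rng.normal(size=(3, 4))           # utterance embeddings
    h = gcn_layer(adj, utt, w)
    per_conv.append(h[0])                   # node 0 = central speaker feature

speaker_vec = fuse_conversations(per_conv)  # one vector for recognition
```

In the actual model the utterance embeddings would come from a pretrained encoder adjusted via contrastive learning with affective information, and the fused vector would feed a personality classifier.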
Pages: 13
Related Papers
36 records in total
  • [21] UTTERANCE-LEVEL END-TO-END LANGUAGE IDENTIFICATION USING ATTENTION-BASED CNN-BLSTM
    Cai, Weicheng
    Cai, Danwei
    Huang, Shen
    Li, Ming
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 5991 - 5995
  • [22] Joint Autoregressive Modeling of End-to-End Multi-Talker Overlapped Speech Recognition and Utterance-level Timestamp Prediction
    Makishima, Naoki
    Suzuki, Keita
    Suzuki, Satoshi
    Ando, Atsushi
    Masumura, Ryo
    INTERSPEECH 2023, 2023, : 2913 - 2917
  • [23] UTTERANCE-LEVEL SEQUENTIAL MODELING FOR DEEP GAUSSIAN PROCESS BASED SPEECH SYNTHESIS USING SIMPLE RECURRENT UNIT
    Koriyama, Tomoki
    Saruwatari, Hiroshi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7249 - 7253
  • [24] Fusing Multiple Features for Depth-Based Action Recognition
    Zhu, Yu
    Chen, Wenbin
    Guo, Guodong
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2015, 6 (02)
  • [25] Fusing landmark-based features at kernel level for face recognition
    Huang, Ke-Kun
    Dai, Dao-Qing
    Ren, Chuan-Xian
    Yu, Yu-Feng
    Lai, Zhao-Rong
    PATTERN RECOGNITION, 2017, 63 : 406 - 415
  • [26] Accurate and robust facial expressions recognition by fusing multiple sparse representation based classifiers
    Yan Ouyang
    Nong Sang
    Rui Huang
    NEUROCOMPUTING, 2015, 149 : 71 - 78
  • [27] Multi-view Facial Expression Recognition Based on Fusing Low-level and Mid-level Features
    Bi, Mingyue
    Ma, Xin
    Song, Rui
    Rong, Xuewen
    Li, Yibin
    2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018, : 9083 - 9088
  • [28] Feature-level fusion method based on KFDA for multimodal recognition fusing ear and profile face
    Xu, Xiao-Na
    Mu, Zhi-Chun
    Yuan, Li
    2007 INTERNATIONAL CONFERENCE ON WAVELET ANALYSIS AND PATTERN RECOGNITION, VOLS 1-4, PROCEEDINGS, 2007, : 1306 - 1310
  • [29] Fusing multiple features for Fourier Mellin-based face recognition with single example image per person
    Chen, Yee Ming
    Chiang, Jen-Hong
    NEUROCOMPUTING, 2010, 73 (16-18) : 3089 - 3096