Prompt-based Pre-trained Model for Personality and Interpersonal Reactivity Prediction

Cited: 0
Authors
Li, Bin [1 ]
Weng, Yixuan [2 ]
Song, Qiya [1 ]
Ma, Fuyan [1 ]
Sun, Bin [1 ]
Li, Shutao [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha, Peoples R China
[2] Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
Source
PROCEEDINGS OF THE 12TH WORKSHOP ON COMPUTATIONAL APPROACHES TO SUBJECTIVITY, SENTIMENT & SOCIAL MEDIA ANALYSIS | 2022
Funding
National Key R&D Program of China
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
This paper describes the LingJing team's method for the Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA) 2022 shared task on Personality Prediction (PER) and Interpersonal Reactivity Index (IRI) prediction. We adopt a prompt-based method with a pre-trained language model to accomplish these tasks. Specifically, the prompt is designed to inject extra personalized information into the pre-trained model. Data augmentation and model ensembling are further adopted to improve the results. Extensive experiments demonstrate the effectiveness of the proposed method. In the final submission, our system achieves Pearson correlation coefficients of 0.2301 and 0.2546 on Track 3 and Track 4, respectively, ranking 1st on both sub-tasks.
Pages: 265 - 270
Page count: 6
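
The abstract above only sketches the approach, so the following is a minimal illustration of what prompt-based regression with a pre-trained language model can look like for these tracks. The model choice (roberta-base), the demographic prompt template, and the linear regression head are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: any encoder-style PLM would do

class PromptRegressor(nn.Module):
    """Encode a prompted essay and regress a single trait score."""
    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # <s>/[CLS] representation
        return self.head(cls).squeeze(-1)   # one real-valued score per essay

def build_prompt(essay: str, age: int, gender: str) -> str:
    # Assumption: the "extra personalized information" is verbalized
    # demographic metadata prepended to the essay text.
    return f"The author is a {age}-year-old {gender}. Essay: {essay}"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = PromptRegressor()
batch = tokenizer(
    [build_prompt("I felt deeply moved by the article ...", 25, "female")],
    truncation=True, padding=True, return_tensors="pt",
)
with torch.no_grad():
    score = model(batch["input_ids"], batch["attention_mask"])
print(score)  # untrained head, so the value is arbitrary until fine-tuned

Fine-tuning such a model would minimize mean-squared error against the gold PER/IRI scores; the model ensemble mentioned in the abstract could then be approximated by averaging the predictions of several runs trained with different seeds or augmented data.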
Related Papers
50 in total
  • [1] Prompt-based system for personality and interpersonal reactivity prediction
    Li, Bin
    Weng, Yixuan
    SOFTWARE IMPACTS, 2022, 12
  • [2] Relational Prompt-Based Pre-Trained Language Models for Social Event Detection
    Li, Pu
    Yu, Xiaoyan
    Peng, Hao
    Xian, Yantuan
    Wang, Linqin
    Sun, Li
    Zhang, Jingyun
    Yu, Philip S.
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 43 (01)
  • [3] On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex
    Zhuo, Terry Yue
    Li, Zhuang
    Huang, Yujin
    Shiri, Fatemeh
    Wang, Weiqing
    Haffari, Gholamreza
    Li, Yuan-Fang
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1090 - 1102
  • [4] The Biases of Pre-Trained Language Models: An Empirical Study on Prompt-Based Sentiment Analysis and Emotion Detection
    Mao, Rui
    Liu, Qian
    He, Kai
    Li, Wei
    Cambria, Erik
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 1743 - 1753
  • [5] PLPMpro: Enhancing promoter sequence prediction with prompt-learning based pre-trained language model
    Li, Zhongshen
    Jin, Junru
    Long, Wentao
    Wei, Leyi
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 164
  • [6] Prediction of degradation for mRNA vaccines based on pre-trained model
    Fan, Jixiang
    Liu, Chuang
    Zhang, Jianzhang
    Zhan, Xiuxiu
    Hu, Huajun
    PROCEEDINGS OF 2024 4TH INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND INTELLIGENT COMPUTING, BIC 2024, 2024, : 455 - 459
  • [7] Talent Supply and Demand Matching Based on Prompt Learning and the Pre-Trained Language Model
    Li, Kunping
    Liu, Jianhua
    Zhuang, Cunbo
    APPLIED SCIENCES-BASEL, 2025, 15 (05)
  • [8] BEYOND SIMPLE TEXT STYLE TRANSFER: UNVEILING COMPOUND TEXT STYLE TRANSFER WITH PROMPT-BASED PRE-TRAINED LANGUAGE MODELS
    Ju, Shuai
    Wang, Chenxu
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 6850 - 6854
  • [9] Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model
    Dong, Zijian
    Wu, Yilei
    Chen, Zijiao
    Zhang, Yichi
    Jin, Yueming
    Zhou, Juan Helen
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XI, 2024, 15011 : 512 - 521
  • [10] Prompt Tuning for Discriminative Pre-trained Language Models
    Yao, Yuan
    Dong, Bowen
    Zhang, Ao
    Zhang, Zhengyan
    Xie, Ruobing
    Liu, Zhiyuan
    Lin, Leyu
    Sun, Maosong
    Wang, Jianyong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 3468 - 3473