Working Memory Theory Driven Natural Attribute Prediction Model for Social Media User Profiling

Cited by: 0
Authors
Liu J. [1 ]
Li L. [2 ]
Long S. [2 ]
Wang C. [2 ]
Affiliations
[1] School of Computer Science, Hubei University of Technology, Wuhan
[2] School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan
Funding
National Natural Science Foundation of China
Keywords
Natural Attribute; Social Media; User Profiling; Working Memory Theory;
DOI
10.16451/j.cnki.issn1003-6059.202310002
Abstract
Constructing user profiling systems from the content generated by social media users can provide personalized services and precise marketing for e-commerce platforms, and it is a significant research direction in social media analysis. In this paper, the document-level multimodal data formed by users publishing content chronologically is studied, and the challenges it poses to user profiling are analyzed. Focusing on natural attributes, primarily user gender and birth year, how to efficiently process and analyze the document-level multimodal data posted by social media users is also studied, and a natural attribute prediction model for social media user profiling is proposed. Inspired by cognitive psychology, an effective data chunking method is designed based on working memory theory to alleviate the problems of broken semantics and synthetic discourse in traditional methods. To address user content preference, an attention mechanism is employed to balance task contributions between intra-modal and inter-modal data. Experiments show that the proposed model is superior in predicting user gender and birth year. © 2023 Journal of Pattern Recognition and Artificial Intelligence. All rights reserved.
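The abstract outlines two mechanisms: working-memory-style chunking of a user's chronological posts, and attention that balances intra-modal and inter-modal contributions. Below is a minimal, hypothetical PyTorch sketch of these two ideas, not the paper's implementation: the chunk size of 7 (Miller's "7 ± 2" working-memory capacity), the 256-dimensional embeddings, the use of nn.MultiheadAttention, and the binned birth-year classification head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

WM_CHUNK_SIZE = 7  # assumption: working-memory capacity of "7 +/- 2" items

def chunk_posts(posts, chunk_size=WM_CHUNK_SIZE):
    """Group chronologically ordered posts into working-memory-sized chunks,
    so each chunk is encoded as one coherent unit rather than truncating a
    post mid-sentence (broken semantics) or concatenating the whole timeline
    into one oversized synthetic document."""
    return [posts[i:i + chunk_size] for i in range(0, len(posts), chunk_size)]

class NaturalAttributePredictor(nn.Module):
    """Hypothetical fusion head: attention first weighs chunk features within
    each modality, then across modalities, before predicting gender and a
    binned birth year."""

    def __init__(self, dim=256, n_year_bins=50):
        super().__init__()
        self.intra_text = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.intra_image = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.gender_head = nn.Linear(dim, 2)
        self.year_head = nn.Linear(dim, n_year_bins)

    def forward(self, text_chunks, image_chunks):
        # text_chunks, image_chunks: (batch, n_chunks, dim) chunk embeddings
        t, _ = self.intra_text(text_chunks, text_chunks, text_chunks)
        v, _ = self.intra_image(image_chunks, image_chunks, image_chunks)
        fused, _ = self.inter(t, v, v)  # text chunks attend to image chunks
        user = fused.mean(dim=1)        # pool chunks into one user vector
        return self.gender_head(user), self.year_head(user)

# Usage with dummy chunk embeddings (stand-ins for text/image encoder output):
model = NaturalAttributePredictor()
text = torch.randn(1, 5, 256)   # 5 text chunks for one user
image = torch.randn(1, 5, 256)  # 5 image chunks for one user
gender_logits, year_logits = model(text, image)
print(chunk_posts(list(range(12))))  # -> [[0, ..., 6], [7, ..., 11]]
```

Under these assumptions, chunking keeps each post intact within a chunk, which is what alleviates the broken-semantics and synthetic-discourse issues the abstract mentions; the cross-attention step lets one modality reweight the other when a user prefers posting text over images or vice versa.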
Pages: 877-889
Number of pages: 12