Multi-modal microblog classification via multi-task learning

Cited by: 26
|
Authors
Zhao, Sicheng [1 ]
Yao, Hongxun [1 ]
Zhao, Sendong [1 ]
Jiang, Xuesong [1 ]
Jiang, Xiaolei [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Microblog classification; Multi-modal classification; Multi-task learning; Structural regularization; Social media analysis; IMAGE; RETRIEVAL;
DOI
10.1007/s11042-014-2342-2
CLC classification number
TP [Automation Technology, Computer Technology];
Subject classification code
0812 ;
Abstract
Recent years have witnessed the flourishing of social media platforms (SMPs) such as Twitter, Facebook, and Sina Weibo. The rapid development of these SMPs has produced increasingly large-scale multimedia data, which has been shown to have remarkable marketing value. There is an urgent need to classify these social media data into a specified list of entities of interest, such as brands, products, and events, in order to analyze their sales, popularity, or influence. This is a challenging task, however, owing to the shortness and conversational style of microblogs, the incompatibility between their images and text, and the diversity of the data. In this paper, we present a multi-modal microblog classification method in a multi-task learning framework. First, features of different modalities are extracted for each microblog: TF-IDF features for each microblog text, and both low-level visual features and high-level semantic features for each microblog image. Then, multiple related classification tasks are learned simultaneously for each feature, which increases the effective sample size per task and improves prediction performance. Finally, the outputs for each feature are integrated by a Support Vector Machine that learns how to optimally combine and weight them. We evaluate the proposed method on Brand-Social-Net, classifying the 100 brands it contains. Experimental results demonstrate the superiority of the proposed method over state-of-the-art approaches.
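The abstract's pipeline — per-modality features, one classifier per feature, and an SVM that fuses their outputs — can be sketched as follows. This is a hypothetical, simplified illustration using scikit-learn on synthetic toy data, not the authors' multi-task formulation with structural regularization; `clf_text`, `clf_img`, and the toy brand labels are all assumptions introduced for the example.

```python
# Late-fusion sketch: per-modality classifiers whose scores are combined by an SVM.
# Toy data and logistic-regression stand-ins; NOT the paper's MTL formulation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy microblog texts for two hypothetical brand classes.
texts = ["new phone camera review", "phone battery life great",
         "coffee latte morning", "espresso beans coffee shop"] * 5
labels = np.array([0, 0, 1, 1] * 5)

# Modality 1: TF-IDF text features.
X_text = TfidfVectorizer().fit_transform(texts)

# Modality 2: stand-in visual features (random here; real features would
# come from the microblog images), with some injected class signal.
X_img = rng.normal(size=(len(texts), 16))
X_img[labels == 1] += 1.0

# One classifier per modality (stand-in for the per-feature tasks).
clf_text = LogisticRegression().fit(X_text, labels)
clf_img = LogisticRegression().fit(X_img, labels)

# Late fusion: an SVM learns to weight the per-modality decision scores.
scores = np.column_stack([clf_text.decision_function(X_text),
                          clf_img.decision_function(X_img)])
fusion = SVC(kernel="linear").fit(scores, labels)
print(fusion.score(scores, labels))
```

In the paper itself, the per-feature stage is a set of jointly learned, structurally regularized tasks rather than independent classifiers; the fusion SVM plays the same role in both.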
Pages: 8921 / 8938
Number of pages: 18
Related papers
50 records in total
  • [1] Multi-modal microblog classification via multi-task learning
    Sicheng Zhao
    Hongxun Yao
    Sendong Zhao
    Xuesong Jiang
    Xiaolei Jiang
    [J]. Multimedia Tools and Applications, 2016, 75 : 8921 - 8938
  • [2] Personalized Microblog Sentiment Classification via Multi-Task Learning
    Wu, Fangzhao
    Huang, Yongfeng
    [J]. THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 3059 - 3065
  • [3] Twitter Demographic Classification Using Deep Multi-modal Multi-task Learning
    Vijayaraghavan, Prashanth
    Vosoughi, Soroush
    Roy, Deb
    [J]. PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 2, 2017, : 478 - 483
  • [4] Cloud Type Classification Using Multi-modal Information Based on Multi-task Learning
    Zhang, Yaxiu
    Xie, Jiazu
    He, Di
    Dong, Qing
    Zhang, Jiafeng
    Zhang, Zhong
    Liu, Shuang
    [J]. COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, VOL. 1, 2022, 878 : 119 - 125
  • [5] MultiNet: Multi-Modal Multi-Task Learning for Autonomous Driving
    Chowdhuri, Sauhaarda
    Pankaj, Tushar
    Zipser, Karl
    [J]. 2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 1496 - 1504
  • [6] Multi-Modal Multi-Task Learning for Automatic Dietary Assessment
    Liu, Qi
    Zhang, Yue
    Liu, Zhenguang
    Yuan, Ye
    Cheng, Li
    Zimmermann, Roger
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2347 - 2354
  • [7] Multi-task Classification Model Based On Multi-modal Glioma Data
    Li, Jialun
    Jin, Yuanyuan
    Yu, Hao
    Wang, Xiaoling
    Zhuang, Qiyuan
    Chen, Liang
    [J]. 11TH IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE GRAPH (ICKG 2020), 2020, : 165 - 172
  • [8] Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition
    Fan, Dinghao
    Lu, Hengjie
    Xu, Shugong
    Cao, Shan
    [J]. IEEE SENSORS JOURNAL, 2021, 21 (23) : 27026 - 27036
  • [9] Multi-modal embeddings using multi-task learning for emotion recognition
    Khare, Aparna
    Parthasarathy, Srinivas
    Sundaram, Shiva
    [J]. INTERSPEECH 2020, 2020, : 384 - 388
  • [10] Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis
    Akhtar, Md Shad
    Chauhan, Dushyant Singh
    Ghosal, Deepanway
    Poria, Soujanya
    Ekbal, Asif
    Bhattacharyya, Pushpak
    [J]. 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 370 - 379