LLMs to the Moon? Reddit Market Sentiment Analysis with Large Language Models

Cited by: 3
Authors
Deng, Xiang [1 ,4 ]
Bashlovkina, Vasilisa [2 ]
Han, Feng [2 ]
Baumgartner, Simon [2 ]
Bendersky, Michael [3 ]
Affiliations
[1] Ohio State University, Columbus, OH 43210, USA
[2] Google Research, New York, NY, USA
[3] Google Research, Mountain View, CA, USA
[4] Google, Mountain View, CA 94043, USA
Keywords
Sentiment Analysis; Social Media; Finance; Large Language Model; Natural Language Processing; Textual Analysis
DOI
10.1145/3543873.3587605
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. In this work, we conduct a case study approaching this problem with semi-supervised learning using a large language model (LLM). We select Reddit as the target social media platform due to its broad coverage of topics and content types. Our pipeline first generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while training the student model using a regression loss further improves distillation quality. With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model's competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.
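A minimal sketch (not the authors' released code) of the weak-labeling step the abstract describes: sample several Chain-of-Thought completions per Reddit post, parse a numeric sentiment score from each reasoning path, and average them into a soft label. The function name call_llm, the prompt wording, and the [-1, 1] score range are illustrative assumptions.

```python
# Hypothetical sketch of LLM weak labeling with multiple reasoning paths.
import re
import statistics
from typing import Callable, List

PROMPT = (
    "Read the Reddit post below and reason step by step about its financial "
    "market sentiment. End with 'Sentiment: <score>', where <score> ranges "
    "from -1 (bearish) to 1 (bullish).\n\nPost: {post}"
)

def weak_label(post: str, call_llm: Callable[[str], str], n_paths: int = 5) -> float:
    """Aggregate several sampled Chain-of-Thought completions into one soft label."""
    scores: List[float] = []
    for _ in range(n_paths):  # multiple reasoning paths stabilize the label
        completion = call_llm(PROMPT.format(post=post))
        match = re.search(r"Sentiment:\s*(-?\d+(?:\.\d+)?)", completion)
        if match:
            scores.append(max(-1.0, min(1.0, float(match.group(1)))))
    # Fall back to neutral (0.0) if no path produced a parsable score.
    return statistics.mean(scores) if scores else 0.0
```

The averaged score would then serve as the regression target (e.g., under a mean-squared-error loss) when distilling into the small production-size student model, matching the regression-loss distillation the abstract mentions.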
Pages: 1014-1019
Number of pages: 6