PromptCast: A New Prompt-Based Learning Paradigm for Time Series Forecasting

Cited by: 12
Authors
Xue, Hao [1 ]
Salim, Flora D. [1 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
Keywords
Forecasting; Predictive models; Task analysis; Time series analysis; Numerical models; Natural languages; Benchmark testing; Time series forecasting; natural language generation; dataset and benchmark
DOI
10.1109/TKDE.2023.3342137
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a new perspective on time series forecasting. In existing time series forecasting methods, the models take a sequence of numerical values as input and yield numerical values as output. The existing SOTA models are largely based on the Transformer architecture, modified with multiple encoding mechanisms to incorporate the context and semantics around the historical data. Inspired by the successes of pre-trained language foundation models, we ask whether these models can also be adapted to solve time series forecasting. Thus, we propose a new forecasting paradigm: prompt-based time series forecasting (PromptCast). In this novel task, the numerical input and output are transformed into prompts and the forecasting task is framed in a sentence-to-sentence manner, making it possible to directly apply language models for forecasting purposes. To support and facilitate research on this task, we also present a large-scale dataset (PISA) that covers three real-world forecasting scenarios. We evaluate different SOTA numerical-based forecasting methods and language generation models. The benchmark results under various forecasting settings demonstrate that the proposed PromptCast with language generation models is a promising research direction. Additionally, compared to conventional numerical-based forecasting, PromptCast shows much better generalization ability under the zero-shot setting.
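The core idea of PromptCast is to serialize a numerical history into an input sentence, let a language model generate an output sentence, and then parse the forecast value back out. The sketch below illustrates this round-trip with a hypothetical template; the exact prompt wording, date formats, and parsing rules are defined by the paper and its PISA dataset, so the function and template names here are illustrative assumptions, not the authors' implementation.

```python
import re


def make_prompt(history, start_date, target_date, unit="degrees"):
    """Serialize a numeric history into a PromptCast-style input sentence.

    The template below is a hypothetical example of framing forecasting
    as a sentence-to-sentence task, not the paper's exact wording.
    """
    values = ", ".join(str(v) for v in history)
    return (
        f"From {start_date}, the values were {values} {unit} on each day. "
        f"What will the value be on {target_date}?"
    )


def parse_forecast(sentence):
    """Recover the numeric forecast from a generated output sentence,
    e.g. 'The value will be 24 degrees.' Returns None if no number found."""
    match = re.search(r"-?\d+(?:\.\d+)?", sentence)
    return float(match.group()) if match else None


prompt = make_prompt([20, 21, 23], "June 1", "June 4")
# A language model would generate the output sentence; here we parse a
# sample generation to show the sentence-to-number decoding step.
forecast = parse_forecast("The value will be 24 degrees.")
```

In this framing, a pre-trained sequence-to-sequence language model can be fine-tuned on (input sentence, output sentence) pairs directly, with no numerical encoder; the only task-specific pieces are the prompt template and the output parser.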
Pages: 6851-6864
Page count: 14
Related Papers
50 records in total
  • [21] A new perspective on air quality index time series forecasting: A ternary interval decomposition ensemble learning paradigm
    Wang, Zicheng
    Gao, Ruobin
    Wang, Piao
    Chen, Huayou
    TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2023, 191
  • [22] Combating the COVID-19 infodemic using Prompt-Based curriculum learning
    Peng, Zifan
    Li, Mingchen
    Wang, Yue
    Ho, George T. S.
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 229
  • [23] Prompt text classifications with transformer models! An exemplary introduction to prompt-based learning with large language models
    Mayer, Christian W. F.
    Ludwig, Sabrina
    Brandt, Steffen
    JOURNAL OF RESEARCH ON TECHNOLOGY IN EDUCATION, 2023, 55 (01) : 125 - 141
  • [24] Prompt-Based Joint Contrastive Learning for Zero-Shot Relation Extraction
    Zou, Jianjian
    Xiao, Yuhui
    Zhou, Sichi
    Li, Wei
    Yang, Qun
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT I, NLPCC 2024, 2025, 15359 : 419 - 431
  • [25] Time Series Forecasting with an EMD-LSSVM-PSO Ensemble Adaptive Learning Paradigm
    Jiang, Tiejun
    Zhou, Chengjie
    Zhang, Huaiqiang
    2018 INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND INTELLIGENT SYSTEMS (CIIS 2018), 2018, : 44 - 50
  • [26] Joint contrastive learning for prompt-based few-shot language learners
    Zhengzhong Zhu
    Xuejie Zhang
    Jin Wang
    Xiaobing Zhou
    Neural Computing and Applications, 2024, 36 : 7861 - 7875
  • [27] Prompt-based Few-shot Learning for Table-based Fact Verification
    Hou, Lei
    Liu, Yubo
    Wu, Jie
    Hou, Mengshu
    2022 5TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND NATURAL LANGUAGE PROCESSING, MLNLP 2022, 2022, : 14 - 19
  • [29] Co-training Improves Prompt-based Learning for Large Language Models
    Lang, Hunter
    Agrawal, Monica
    Kim, Yoon
    Sontag, David
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [30] COVER: A Heuristic Greedy Adversarial Attack on Prompt-Based Learning in Language Models
    Chen, Qingliang
    Springer Science and Business Media Deutschland GmbH, 14326 (LNAI)