Prompt Tuning on Graph-Augmented Low-Resource Text Classification

Cited by: 0
Authors
Wen, Zhihao [1 ]
Fang, Yuan [1 ]
Affiliations
[1] Singapore Management University, School of Computing and Information Systems, Singapore 188065, Singapore
DOI
10.1109/TKDE.2024.3440068
Abstract
Text classification is a fundamental problem in information retrieval with many real-world applications, such as predicting the topics of online articles and the categories of e-commerce product descriptions. However, low-resource text classification, with no or few labeled samples, presents a serious concern for supervised learning. Meanwhile, many text data are inherently grounded on a network structure, such as a hyperlink/citation network for online articles, and a user-item purchase network for e-commerce products. These graph structures capture rich semantic relationships, which can potentially augment low-resource text classification. In this paper, we propose a novel model called Graph-Grounded Pre-training and Prompting (G2P2) to address low-resource text classification in a two-pronged approach. During pre-training, we propose three graph interaction-based contrastive strategies to jointly pre-train a graph-text model; during downstream classification, we explore handcrafted discrete prompts and continuous prompt tuning for the jointly pre-trained model to achieve zero- and few-shot classification, respectively. Moreover, we explore the possibility of employing continuous prompt tuning for zero-shot inference. Specifically, we aim to generalize continuous prompts to unseen classes while leveraging a set of base classes. To this end, we extend G2P2 into G2P2∗, hinging on a new architecture of conditional prompt tuning. Extensive experiments on four real-world datasets demonstrate the strength of G2P2 in zero- and few-shot low-resource text classification tasks, and illustrate the advantage of G2P2∗ in dealing with unseen classes. © 1989-2012 IEEE.
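The continuous prompt tuning described in the abstract can be illustrated with a minimal sketch. All names below (dimensions, class labels, the mean-pooling stand-in for the text encoder) are hypothetical placeholders, not the paper's actual architecture: a small set of learnable context vectors is prepended to each class-name embedding, and a node is classified by cosine similarity between its graph-grounded embedding and the prompted class embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16    # embedding dimension (illustrative)
N_CTX = 4   # number of learnable continuous prompt vectors
CLASSES = ["sports", "politics", "science"]

# Stand-ins for class-word embeddings from a jointly pre-trained
# graph-text encoder (hypothetical values).
class_word_emb = {c: rng.normal(size=DIM) for c in CLASSES}

# Continuous prompt: N_CTX learnable vectors shared across classes,
# initialized near zero and tuned on a few labeled examples.
prompt_ctx = rng.normal(scale=0.01, size=(N_CTX, DIM))

def prompted_class_embedding(class_name):
    """Mean-pool prompt vectors with the class-word embedding -- a toy
    stand-in for encoding the sequence [v1..vM, CLASS] with the text encoder."""
    tokens = np.vstack([prompt_ctx, class_word_emb[class_name]])
    return tokens.mean(axis=0)

def classify(node_emb):
    """Assign the class whose prompted embedding is most cosine-similar."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = {c: cos(node_emb, prompted_class_embedding(c)) for c in CLASSES}
    return max(scores, key=scores.get)

# A node whose embedding roughly aligns with the "science" class:
node = class_word_emb["science"] + rng.normal(scale=0.1, size=DIM)
print(classify(node))
```

In the paper's actual pipeline the prompt vectors are optimized by gradient descent through the frozen text encoder; this sketch only shows the inference-time scoring step.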
Pages: 9080-9095
Related Papers
50 records in total
  • [1] Prompt-based for Low-Resource Tibetan Text Classification
    An, Bo
    [J]. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2023, 22 (08)
  • [2] The Power of Prompt Tuning for Low-Resource Semantic Parsing
    Schucher, Nathan
    Reddy, Siva
    de Vries, Harm
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): (SHORT PAPERS), VOL 2, 2022, : 148 - 156
  • [3] Unifying Graph Retrieval and Prompt Tuning for Graph-Grounded Text Classification
    Dai, Le
    Yin, Yu
    Chen, Enhong
    Xiong, Hui
    [J]. PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 2682 - 2686
  • [4] PromptEM: Prompt-tuning for Low-resource Generalized Entity Matching
    Wang, Pengfei
    Zeng, Xiaocan
    Chen, Lu
    Ye, Fan
    Mao, Yuren
    Zhu, Junhao
    Gao, Yunjun
    [J]. PROCEEDINGS OF THE VLDB ENDOWMENT, 2022, 16 (02): : 369 - 378
  • [5] Augmenting Low-Resource Text Classification with Graph-Grounded Pre-training and Prompting
    Wen, Zhihao
    Fang, Yuan
    [J]. PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 506 - 516
  • [6] RDF-to-Text Generation with Graph-augmented Structural Neural Encoders
    Gao, Hanning
    Wu, Lingfei
    Hu, Po
    Xu, Fangli
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3030 - 3036
  • [7] Exploring low-resource medical image classification with weakly supervised prompt learning
    Zheng, Fudan
    Cao, Jindong
    Yu, Weijiang
    Chen, Zhiguang
    Xiao, Nong
    Lu, Yutong
    [J]. PATTERN RECOGNITION, 2024, 149
  • [8] Multi-Stage Prompt Tuning for Political Perspective Detection in Low-Resource Settings
    Kim, Kang-Min
    Lee, Mingyu
    Won, Hyun-Sik
    Kim, Min-Ji
    Kim, Yeachan
    Lee, SangKeun
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (10):
  • [9] PTR: Prompt Tuning with Rules for Text Classification
    Han, Xu
    Zhao, Weilin
    Ding, Ning
    Liu, Zhiyuan
    Sun, Maosong
    [J]. AI OPEN, 2022, 3 : 182 - 192
  • [10] Low-resource text classification using domain-adversarial learning
    Griesshaber, Daniel
    Ngoc Thang Vu
    Maucher, Johannes
    [J]. COMPUTER SPEECH AND LANGUAGE, 2020, 62