Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks

Cited by: 0
Authors:
Wen, Zhihao [1 ]
Fang, Yuan [1 ]
Liu, Yihan [2 ]
Guo, Yang [2 ]
Hao, Shuji [2 ]
Affiliations:
[1] Singapore Management Univ, Singapore, Singapore
[2] Lazada Inc, Singapore, Singapore
Keywords:
Anomaly detection; graph neural networks; pre-training; prompt
DOI: 10.1145/3583780.3615505
CLC Number:
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Voucher abuse detection is an important anomaly detection problem in E-commerce. While many GNN-based solutions have emerged, the supervised paradigm depends on a large quantity of labeled data. A popular alternative is to adopt self-supervised pre-training using label-free data, and further fine-tune on a downstream task with limited labels. Nevertheless, the "pre-train, fine-tune" paradigm is often plagued by the objective gap between pre-training and downstream tasks. Hence, we propose VPGNN, a prompt-based fine-tuning framework on GNNs for voucher abuse detection. We design a novel graph prompting function to reformulate the downstream task into a similar template as the pretext task in pre-training, thereby narrowing the objective gap. Extensive experiments on both proprietary and public datasets demonstrate the strength of VPGNN in both few-shot and semi-supervised scenarios. Moreover, an online evaluation of VPGNN shows a 23.4% improvement over two existing deployed models.
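The abstract's core idea is that prompt-based fine-tuning recasts the downstream classification task in the same template as the pre-training pretext task, so the pre-trained model can be reused with little adaptation. The paper's actual graph prompting function is not reproduced here; the sketch below only illustrates the general idea under an assumed setup: a contrastive pretext task that optimizes embedding similarity, with downstream few-shot classification done by matching a query node's embedding against class prototypes instead of training a new classifier head. The function `prompt_predict` and the prototype scheme are illustrative assumptions, not VPGNN's design.

```python
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)


def prompt_predict(query_emb, support_embs, support_labels):
    """Classify a node embedding by similarity to class prototypes.

    This mirrors a contrastive pretext objective (similarity matching)
    rather than attaching a freshly initialized classifier head, which
    is the gap-narrowing idea the abstract describes.
    """
    classes = sorted(set(support_labels))
    protos = {}
    for c in classes:
        # Prototype = mean of the few labeled support embeddings per class.
        members = [e for e, y in zip(support_embs, support_labels) if y == c]
        protos[c] = [sum(dim) / len(members) for dim in zip(*members)]
    # Predict the class whose prototype is most similar to the query.
    return max(classes, key=lambda c: cosine(query_emb, protos[c]))
```

In a few-shot setting, `support_embs` would be the handful of labeled node embeddings produced by the pre-trained GNN; no new task-specific parameters are needed for prediction in this simplified view.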
Pages: 4864-4870 (7 pages)