Pre-trained Online Contrastive Learning for Insurance Fraud Detection

Cited by: 0
Authors
Zhang, Rui [1 ,2 ]
Cheng, Dawei [1 ,2 ,3 ]
Yang, Jie [1 ]
Ouyang, Yi [1 ,4 ]
Wu, Xian [4 ]
Zheng, Yefeng [4 ]
Jiang, Changjun [1 ,2 ]
Affiliations
[1] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
[3] Key Lab Artificial Intelligence, Minist Educ, Shanghai, Peoples R China
[4] Tencent YouTu Lab, Jarvis Res Ctr, Shenzhen, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
HEALTH-CARE;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Medical insurance fraud has long been a critical challenge in the healthcare industry. Existing fraud detection models mostly focus on offline learning scenarios. However, fraud patterns evolve constantly, making it difficult for models trained on past data to detect newly emerging patterns and posing a severe challenge for medical fraud detection. Moreover, current incremental learning models are mostly designed to mitigate catastrophic forgetting, but they often exhibit suboptimal performance on fraud detection. To address this challenge, this paper proposes POCL, an online learning method for medical insurance fraud detection that combines contrastive learning pre-training with online updating strategies. In the pre-training stage, we apply contrastive learning to historical data to learn deep features and obtain rich risk representations. In the online learning stage, we adopt a Temporal Memory Aware Synapses updating strategy, allowing the model to perform incremental learning and optimization on continuously arriving new data. This ensures timely adaptation to new fraud patterns while reducing forgetting of past knowledge. Extensive experiments and evaluations on real-world insurance fraud datasets demonstrate that our model achieves significant accuracy gains over state-of-the-art baselines while requiring less running time and memory. Our source code is released at https://github.com/finint/POCL.
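
The abstract outlines two stages: contrastive pre-training on historical claims and online updates regularized by a Temporal Memory Aware Synapses strategy. The sketch below is a minimal, hypothetical PyTorch illustration of those two ideas, not the authors' released implementation (see the GitHub link for that): the encoder architecture, the InfoNCE loss, the MAS-style importance estimate, and all names such as Encoder, info_nce_loss, and mas_penalty are illustrative assumptions, and the temporal weighting of the paper's Temporal MAS variant is omitted.

# Hypothetical sketch of contrastive pre-training + MAS-regularized online updates.
# Not the authors' code; names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Small MLP that maps claim features to risk representations."""

    def __init__(self, in_dim: int, hid_dim: int = 64, out_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Contrastive loss: two augmented views of the same claim are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


def mas_importance(model: nn.Module, batches) -> dict:
    """MAS-style importance: average gradient magnitude of the squared output norm."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x in batches:
        model.zero_grad()
        model(x).pow(2).sum().backward()        # sensitivity of ||f(x)||^2 w.r.t. parameters
        for n, p in model.named_parameters():
            omega[n] += p.grad.abs()
    return {n: v / max(len(batches), 1) for n, v in omega.items()}


def mas_penalty(model: nn.Module, old_params: dict, omega: dict, strength: float = 1.0) -> torch.Tensor:
    """Quadratic penalty anchoring parameters that past data deemed important."""
    loss = torch.tensor(0.0)
    for n, p in model.named_parameters():
        loss = loss + (omega[n] * (p - old_params[n]).pow(2)).sum()
    return strength * loss


if __name__ == "__main__":
    torch.manual_seed(0)
    model = Encoder(in_dim=16)

    # Stage 1: contrastive pre-training on (synthetic) historical claims.
    hist = torch.randn(128, 16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):
        v1 = hist + 0.1 * torch.randn_like(hist)   # two noisy "views" of each claim
        v2 = hist + 0.1 * torch.randn_like(hist)
        loss = info_nce_loss(model(v1), model(v2))
        opt.zero_grad(); loss.backward(); opt.step()

    # Snapshot parameter importance and values before going online.
    omega = mas_importance(model, [hist[i:i + 32] for i in range(0, 128, 32)])
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

    # Stage 2: online update on a new mini-batch with a fraud head + MAS penalty.
    head = nn.Linear(32, 2)
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
    x_new, y_new = torch.randn(32, 16), torch.randint(0, 2, (32,))
    loss = F.cross_entropy(head(model(x_new)), y_new) + mas_penalty(model, anchor, omega)
    opt.zero_grad(); loss.backward(); opt.step()

The key design point this sketch tries to convey is the separation of concerns: the contrastive loss shapes risk representations from unlabeled historical data, while the importance-weighted penalty lets later online updates fit new fraud patterns without overwriting parameters that encode past knowledge.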
Pages: 22511-22519
Number of pages: 9