Pre-trained Online Contrastive Learning for Insurance Fraud Detection

Cited by: 0
Authors
Zhang, Rui [1,2]
Cheng, Dawei [1,2,3]
Yang, Jie [1]
Ouyang, Yi [1,4]
Wu, Xian [4]
Zheng, Yefeng [4]
Jiang, Changjun [1,2]
Affiliations
[1] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
[3] Key Lab Artificial Intelligence, Minist Educ, Shanghai, Peoples R China
[4] Tencent YouTu Lab, Jarvis Res Ctr, Shenzhen, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
HEALTH-CARE;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Medical insurance fraud has long been a crucial challenge in the healthcare industry. Existing fraud detection models mostly focus on offline learning settings. However, fraud patterns are constantly evolving, making it difficult for models trained on past data to detect newly emerging fraud patterns and posing a severe challenge for medical fraud detection. Moreover, current incremental learning models are mostly designed to address catastrophic forgetting but often exhibit suboptimal performance on fraud detection. To address this challenge, this paper proposes an innovative online learning method for medical insurance fraud detection, named POCL. The method combines contrastive learning pre-training with an online updating strategy. In the pre-training stage, we leverage contrastive learning on historical data to perform deep feature learning and obtain rich risk representations. In the online learning stage, we adopt a Temporal Memory Aware Synapses online updating strategy, allowing the model to perform incremental learning and optimization on continuously arriving new data. This ensures timely adaptation to new fraud patterns while reducing forgetting of past knowledge. Our model undergoes extensive experiments and evaluations on real-world insurance fraud datasets. The results demonstrate that our model achieves significant accuracy advantages over state-of-the-art baseline methods, while also exhibiting lower running time and space consumption. Our source code is released at https://github.com/finint/POCL.
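The abstract describes a two-stage pipeline: contrastive pre-training on historical data, followed by online updates that are regularized against forgetting. The Python sketch below is not the released POCL code; it is a minimal, assumption-laden outline that pairs a standard InfoNCE contrastive loss with a Memory Aware Synapses (MAS)-style importance penalty for the online stage, which is the general mechanism the Temporal Memory Aware Synapses strategy builds on. All function names, the data-loader interface, and hyperparameters are illustrative placeholders.

# Minimal sketch (not the authors' implementation) of the two-stage idea:
# (1) contrastive pre-training on historical data, (2) online updates with a
# MAS-style parameter-importance penalty to limit forgetting.
import torch
import torch.nn.functional as F


def info_nce_loss(z1, z2, temperature=0.1):
    """Standard InfoNCE contrastive loss between two views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def mas_importance(model, loader, device="cpu"):
    """Accumulate MAS importance: gradient of the squared output norm."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, _ in loader:
        model.zero_grad()
        out = model(x.to(device))
        out.pow(2).sum().backward()             # sensitivity of outputs to parameters
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs() / len(loader)
    return omega


def online_step(model, optimizer, batch, omega, old_params, lam=1.0):
    """One online update on new data with a MAS-style forgetting penalty.

    old_params: detached copies of the parameters taken after pre-training.
    """
    x, y = batch
    loss = F.cross_entropy(model(x), y)         # supervised loss on new data
    penalty = sum((omega[n] * (p - old_params[n]).pow(2)).sum()
                  for n, p in model.named_parameters())
    (loss + lam * penalty).backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()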
Pages: 22511-22519
Number of pages: 9