Expressive user embedding from churn and recommendation multi-task learning

Cited by: 1
Authors
Bai, Huajun [1 ]
Liu, Davide [1 ]
Hirtz, Thomas [2 ]
Boulenger, Alexandre [3 ]
Affiliations
[1] Genify, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Genify, Abu Dhabi, U Arab Emirates
Keywords
multi-task learning; self-attention; user representation; churn prediction; product recommendation;
DOI
10.1145/3543873.3587306
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we present a Multi-Task model for Recommendation and Churn prediction (MT) in the retail banking industry. The model leverages a hard parameter-sharing framework and consists of a shared multi-stack encoder with multi-head self-attention and two fully connected task heads. It is trained to perform two multi-class classification tasks: predicting product churn and identifying the next-best products (NBP) for each user. Our experiments demonstrate the superiority of the multi-task model over its single-task versions, reaching top-1 precision of 78.1% and 77.6% for churn and NBP prediction, respectively. Moreover, we find that the model learns a coherent and expressive high-level representation reflecting user intentions related to both tasks. There is a clear separation between users with acquisitions and users with churn, and acquirers are more tightly clustered than churners. This gradual separability of churning and acquiring users, who diverge in intent, is a desirable property: it provides a basis for model explainability, critical to industry adoption, and also enables other downstream applications. These potential additional benefits, beyond reducing customer attrition and increasing product use (two primary concerns of businesses), make such a model even more valuable.
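The hard parameter-sharing layout described in the abstract (a shared self-attention encoder stack feeding two fully connected task heads) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: all dimensions, layer counts, class counts, and the mean-pooling step are assumptions for the example.

```python
import torch
import torch.nn as nn


class MultiTaskChurnNBP(nn.Module):
    """Sketch: shared multi-stack self-attention encoder with two
    fully connected heads, one per classification task."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2,
                 n_churn_classes=10, n_nbp_classes=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)
        # Hard parameter sharing: both tasks use this encoder.
        self.shared_encoder = nn.TransformerEncoder(layer, n_layers)
        self.churn_head = nn.Linear(d_model, n_churn_classes)
        self.nbp_head = nn.Linear(d_model, n_nbp_classes)

    def forward(self, x):
        h = self.shared_encoder(x)   # shared user representation
        pooled = h.mean(dim=1)       # pool over the sequence axis
        return self.churn_head(pooled), self.nbp_head(pooled)


model = MultiTaskChurnNBP()
x = torch.randn(8, 12, 64)  # 8 users, 12 time steps of features
churn_logits, nbp_logits = model(x)

# Joint training: sum the two task losses on the shared encoder.
loss = (nn.functional.cross_entropy(churn_logits,
                                    torch.randint(0, 10, (8,)))
        + nn.functional.cross_entropy(nbp_logits,
                                      torch.randint(0, 10, (8,))))
loss.backward()
```

Because the encoder parameters receive gradients from both losses, the pooled representation is shaped by churn and acquisition signals jointly, which is what enables the separable user embedding the paper reports.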
Pages: 37-40 (4 pages)