Expressive user embedding from churn and recommendation multi-task learning

Cited by: 1
Authors
Bai, Huajun [1 ]
Liu, Davide [1 ]
Hirtz, Thomas [2 ]
Boulenger, Alexandre [3 ]
Affiliations
[1] Genify, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Genify, Abu Dhabi, U Arab Emirates
Keywords
multi-task learning; self-attention; user representation; churn prediction; product recommendation;
DOI
10.1145/3543873.3587306
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a Multi-Task model for Recommendation and Churn prediction (MT) in the retail banking industry. The model uses a hard parameter-sharing framework: a shared multi-stack encoder with multi-head self-attention feeds two fully connected task heads. It is trained jointly on two multi-class classification tasks: predicting product churn and identifying the next-best products (NBP) for each user. Our experiments demonstrate the superiority of the multi-task model over its single-task counterparts, reaching top-1 precision of 78.1% and 77.6% for churn and NBP prediction, respectively. Moreover, we find that the model learns a coherent and expressive high-level representation reflecting user intentions related to both tasks: users with acquisitions separate clearly from users with churn, and acquirers cluster more tightly than churners. This gradual separability of churning and acquiring users, who diverge in intent, is a desirable property. It provides a basis for model explainability, which is critical to industry adoption, and also enables other downstream applications. These potential additional benefits, beyond reducing customer attrition and increasing product use (two primary concerns of businesses), make such a model even more valuable.
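The hard parameter-sharing architecture described in the abstract (a shared multi-head self-attention encoder with two fully connected task heads) can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: all dimensions, the mean-pooling step, and the `MultiTaskChurnNBP` class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskChurnNBP(nn.Module):
    """Hard parameter sharing: one shared self-attention encoder
    feeding two task-specific fully connected heads (illustrative sketch)."""

    def __init__(self, n_features=32, d_model=64, n_heads=4,
                 n_layers=2, n_products=10):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        # Shared multi-stack encoder with multi-head self-attention.
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Two task heads over the shared user representation.
        self.churn_head = nn.Linear(d_model, n_products)  # product-churn logits
        self.nbp_head = nn.Linear(d_model, n_products)    # next-best-product logits

    def forward(self, x):
        h = self.encoder(self.embed(x))   # (batch, seq_len, d_model)
        z = h.mean(dim=1)                 # pooled user representation
        return self.churn_head(z), self.nbp_head(z)

model = MultiTaskChurnNBP()
x = torch.randn(3, 5, 32)                 # 3 users, 5 time steps, 32 features
churn_logits, nbp_logits = model(x)
```

In such a setup, both tasks would typically be trained jointly by summing a cross-entropy loss per head, so gradients from churn and NBP prediction both shape the shared encoder, which is what lets the pooled representation `z` reflect user intent for both tasks.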
Pages: 37 - 40
Page count: 4
Related Papers
50 records
  • [1] A multi-task embedding based personalized POI recommendation method
    Chen, Ling
    Ying, Yuankai
    Lyu, Dandan
    Yu, Shanshan
    Chen, Gencai
    [J]. CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2021, 3 (03) : 253 - 269
  • [3] Multi-task Feature Learning for Social Recommendation
    Zhang, Yuanyuan
    Sun, Maosheng
    Zhang, Xiaowei
    Zhang, Yonglong
    [J]. KNOWLEDGE GRAPH AND SEMANTIC COMPUTING: KNOWLEDGE GRAPH EMPOWERS NEW INFRASTRUCTURE CONSTRUCTION, 2021, 1466 : 240 - 252
  • [4] Multi-Task Learning Based Network Embedding
    Wang, Shanfeng
    Wang, Qixiang
    Gong, Maoguo
    [J]. FRONTIERS IN NEUROSCIENCE, 2020, 13
  • [5] Service recommendation based on contrastive learning and multi-task learning
    Yu, Ting
    Zhang, Lihua
    Liu, Hailin
    Liu, Hongbing
    Wang, Jiaojiao
    [J]. COMPUTER COMMUNICATIONS, 2024, 213 : 285 - 295
  • [6] Hierarchical Aggregation Based Knowledge Graph Embedding for Multi-task Recommendation
    Wang, Yani
    Zhang, Ji
    Zhou, Xiangmin
    Zhang, Yang
    [J]. WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 174 - 181
  • [7] Multi-Task Learning with Personalized Transformer for Review Recommendation
    Wang, Haiming
    Liu, Wei
    Yin, Jian
    [J]. WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT II, 2021, 13081 : 162 - 176
  • [8] Attentive multi-task learning for group itinerary recommendation
    Chen, Lei
    Cao, Jie
    Chen, Huanhuan
    Liang, Weichao
    Tao, Haicheng
    Zhu, Guixiang
    [J]. KNOWLEDGE AND INFORMATION SYSTEMS, 2021, 63 : 1687 - 1716
  • [9] A novel embedding learning framework for relation completion and recommendation based on graph neural network and multi-task learning
    Zhao, Wenbin
    Li, Yahui
    Fan, Tongrang
    Wu, Feng
    [J]. SOFT COMPUTING, 2022
  • [10] Unified Voice Embedding through Multi-task Learning
    Rajenthiran, Jenarthanan
    Sithamaparanathan, Lakshikka
    Uthayakumar, Saranya
    Thayasivam, Uthayasanker
    [J]. 2022 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP 2022), 2022, : 178 - 183