A Multi-Task Learning Approach for Delayed Feedback Modeling

Cited by: 5
|
Authors
Huangfu, Zhigang [1 ]
Zhang, Gong-Duo [1 ]
Wu, Zhengwei [1 ]
Wu, Qintong [1 ]
Zhang, Zhiqiang [1 ]
Gu, Lihong [1 ]
Zhou, Jun [1 ]
Gu, Jinjie [1 ]
Affiliations
[1] Ant Grp, Beijing, Peoples R China
Keywords
delayed feedback; recommender system; conversion rate prediction
DOI
10.1145/3487553.3524217
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conversion rate (CVR) prediction is one of the most essential tasks for digital display advertising. In industrial recommender systems, online learning is particularly favored for its capability to capture the dynamic change of data distribution, which often leads to a significant improvement in conversion rates. However, the gap between a click behavior and the corresponding conversion ranges from a few minutes to days; therefore, fresh data may not have accurate label information when they are ingested by the training algorithm, which is called the delayed feedback problem of CVR prediction. To solve this problem, previous works label the delayed positive samples as negative and correct them at their conversion time; they then optimize the expectation of the actual conversion distribution via importance sampling under the observed distribution. However, these methods approximate the actual feature distribution by the observed feature distribution, which may introduce additional bias into the delayed feedback modeling. In this paper, we prove that the observed conversion rate is the product of the actual conversion rate and the observed non-delayed positive rate. We then propose the Multi-Task Delayed Feedback Model (MTDFM), which consists of two sub-networks: an actual CVR network and an NDPR (non-delayed positive rate) network. We train the actual CVR network by simultaneously optimizing the observed conversion rate and the non-delayed positive rate. The proposed method does not require the observed feature distribution to remain the same as the actual distribution. Finally, experimental results on both public and industrial datasets demonstrate that the proposed method consistently outperforms previous state-of-the-art methods.
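The central identity in the abstract, observed CVR = actual CVR × non-delayed positive rate, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses two hand-rolled logistic "towers" on synthetic data in which a fixed fraction of true conversions has not yet arrived at training time, and all names (`p_cvr`, `p_ndpr`, the 60% arrival rate) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Synthetic delayed-feedback data: each click has a true conversion
# label, but some conversions arrive late, so at training time they
# are observed as negatives.
n, d = 4000, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
converts = rng.random(n) < sigmoid(X @ w_star)  # actual conversions
arrived = rng.random(n) < 0.6                   # 60% observed in time
y_obs = (converts & arrived).astype(float)      # observed (biased) label

# Two linear sub-networks standing in for the CVR and NDPR towers.
w_cvr, b_cvr = np.zeros(d), 0.0
w_ndpr, b_ndpr = np.zeros(d), 0.0

lr = 0.3
for _ in range(500):
    p_cvr = sigmoid(X @ w_cvr + b_cvr)     # estimated actual CVR
    p_ndpr = sigmoid(X @ w_ndpr + b_ndpr)  # estimated non-delayed positive rate
    p_obs = p_cvr * p_ndpr                 # observed CVR = product of the two
    # Cross-entropy on the observed label; the gradient of the product
    # model w.r.t. each tower's logit works out to
    # dL/dz_cvr = (p_obs - y)(1 - p_cvr)/(1 - p_obs), and symmetrically for NDPR.
    ratio = (p_obs - y_obs) / np.clip(1.0 - p_obs, 1e-9, None)
    g_cvr = ratio * (1.0 - p_cvr)
    g_ndpr = ratio * (1.0 - p_ndpr)
    w_cvr -= lr * X.T @ g_cvr / n
    b_cvr -= lr * g_cvr.mean()
    w_ndpr -= lr * X.T @ g_ndpr / n
    b_ndpr -= lr * g_ndpr.mean()

# The product is fit to the biased observed labels; the CVR tower is
# the piece intended to track the actual conversion rate.
print(f"observed positive rate:    {y_obs.mean():.3f}")
print(f"mean predicted actual CVR: {sigmoid(X @ w_cvr + b_cvr).mean():.3f}")
```

Note that in this toy setup the split of probability mass between the two towers is not uniquely identified by the product alone; the paper's contribution is precisely the multi-task training scheme that supervises both the observed conversion rate and the non-delayed positive rate so that the CVR tower recovers the actual rate.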
Pages: 116-120
Number of pages: 5