Leaving No One Behind: A Multi-Scenario Multi-Task Meta Learning Approach for Advertiser Modeling

Cited by: 24
Authors
Zhang, Qianqian [1 ]
Liao, Xinru [1 ]
Liu, Quan [1 ]
Xu, Jian [1 ]
Zheng, Bo [1 ]
Affiliations
[1] Alibaba Group, Hangzhou, China
Keywords
Advertiser Modeling; Multi-Task Learning; Meta Learning; Multi-Behavior Learning
DOI
10.1145/3488560.3498479
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Advertisers play an essential role in many e-commerce platforms such as Taobao and Amazon. Fulfilling their marketing needs and supporting their business growth is critical to the long-term prosperity of platform economies. However, compared with extensive studies on user modeling, such as click-through rate prediction, much less attention has been paid to advertisers, especially in terms of understanding their diverse demands and performance. Unlike user modeling, advertiser modeling generally involves many kinds of tasks (e.g., predicting advertisers' expenditure, active rate, or total impressions of promoted products). In addition, major e-commerce platforms often provide multiple marketing scenarios (e.g., Sponsored Search, Display Ads, Live Streaming Ads), while advertisers' behaviors tend to be dispersed among many of them. This raises the necessity of multi-task and multi-scenario consideration in comprehensive advertiser modeling, which faces the following challenges: first, one model per scenario or per task simply does not scale; second, it is particularly hard to model new or minor scenarios with limited data samples; third, inter-scenario correlations are complicated and may vary across tasks. To tackle these challenges, we propose a multi-scenario multi-task meta learning approach (M2M) that simultaneously predicts multiple tasks in multiple advertising scenarios. Specifically, we introduce a novel meta unit that incorporates rich scenario knowledge to learn explicit inter-scenario correlations and can easily scale to new scenarios. Furthermore, we present a meta attention module to capture diverse inter-scenario correlations given different tasks, and a meta tower module to strengthen the representation of scenario-specific features. Compelling results from both offline evaluation and online A/B tests demonstrate the superiority of M2M over state-of-the-art methods.
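The abstract describes the meta unit as a hypernetwork-style component: scenario knowledge generates the parameters of downstream layers, and meta attention then blends expert representations conditioned on scenario and task. The paper's exact formulation is not reproduced in this record, so the following is only a minimal NumPy sketch of that general idea; all names, dimensions, and the use of a single generated linear layer are illustrative assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_ctx = 8, 4, 6   # expert dim, hidden dim, scenario+task context dim

# Hypernetwork parameters: the scenario/task context vector generates the
# weights and bias of a linear layer -- the core "meta unit" intuition.
W_gen = rng.normal(0, 0.1, (d_ctx, d_in * d_hid))
b_gen = rng.normal(0, 0.1, (d_ctx, d_hid))
v_att = rng.normal(0, 0.1, (d_hid,))  # scoring vector for the attention sketch

def meta_unit(x, ctx):
    """Apply a linear layer whose parameters are generated from ctx."""
    W = (ctx @ W_gen).reshape(d_in, d_hid)  # scenario-specific weights
    b = ctx @ b_gen                         # scenario-specific bias
    return np.maximum(x @ W + b, 0.0)       # ReLU

def meta_attention(experts, ctx):
    """Fuse expert representations with scores conditioned on the context."""
    h = np.stack([meta_unit(e, ctx) for e in experts])  # (n_experts, d_hid)
    scores = h @ v_att
    w = np.exp(scores - scores.max())
    w /= w.sum()                                        # softmax weights
    return w @ h, w

experts = [rng.normal(size=d_in) for _ in range(3)]     # shared-expert outputs
ctx = rng.normal(size=d_ctx)  # stand-in for concatenated scenario/task embeddings
fused, w = meta_attention(experts, ctx)
```

Because the per-scenario parameters are *generated* from a scenario embedding rather than stored per scenario, a new or minor scenario only needs an embedding, which is the scaling property the abstract claims.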
Pages: 1368-1376 (9 pages)