Multi-Faceted Knowledge-Driven Pre-Training for Product Representation Learning

Cited: 2
Authors
Zhang, Denghui [1 ]
Liu, Yanchi [4 ]
Yuan, Zixuan [2 ]
Fu, Yanjie [5 ]
Chen, Haifeng
Xiong, Hui [3 ]
Affiliations
[1] Rutgers State Univ, Informat Syst Dept, Newark, NJ 07103 USA
[2] Rutgers State Univ, Management Sci & Informat Syst Dept, Newark, NJ 07103 USA
[3] Rutgers State Univ, Newark, NJ 07103 USA
[4] NEC Labs Amer, Princeton, NJ 08540 USA
[5] Univ Cent Florida, Dept Comp Sci, Orlando, FL 32816 USA
Funding
U.S. National Science Foundation;
Keywords
Task analysis; Monitoring; Semantics; Pediatrics; Representation learning; Electronic publishing; Electronic commerce; Product representation learning; product search; product matching; product classification; pre-trained language models;
DOI
10.1109/TKDE.2022.3200921
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a key component of e-commerce computing, product representation learning (PRL) benefits a variety of applications, including product matching, search, and categorization. Existing PRL approaches have limited language understanding ability because they fail to capture contextualized semantics. In addition, the representations they learn do not transfer easily to new products. Inspired by recent advances in pre-trained language models (PLMs), we adapt PLMs for PRL to mitigate these issues. In this article, we develop KINDLE, a Knowledge-drIven pre-trainiNg framework for proDuct representation LEarning, which preserves contextual semantics and multi-faceted product knowledge robustly and flexibly. Specifically, we first extend traditional one-stage pre-training to a two-stage pre-training framework and employ a dedicated knowledge encoder to ensure smooth knowledge fusion into the PLM. In addition, we propose a multi-objective heterogeneous embedding method to represent thousands of knowledge elements. This helps KINDLE automatically calibrate for knowledge noise and sparsity by replacing isolated classes as training targets in knowledge acquisition tasks. Furthermore, an input-aware gating network is proposed to select the most relevant knowledge for different downstream tasks. Finally, extensive experiments demonstrate the advantages of KINDLE over state-of-the-art baselines across three downstream tasks.
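To make the input-aware gating idea concrete, below is a minimal, hypothetical PyTorch sketch of a gate that weights several knowledge-facet embeddings conditioned on a contextual input representation and fuses them back into the text vector. The class name, dimensions, and residual fusion are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an input-aware knowledge gating network.
# Given a PLM text representation and per-facet knowledge embeddings,
# a learned gate scores each facet per input, softmax-normalizes the
# scores, and fuses the weighted knowledge into the text vector.
import torch
import torch.nn as nn


class InputAwareKnowledgeGate(nn.Module):
    def __init__(self, hidden_dim: int, num_facets: int):
        super().__init__()
        # Scores each knowledge facet conditioned on the input representation.
        self.gate = nn.Linear(hidden_dim * 2, 1)
        self.num_facets = num_facets

    def forward(self, text_repr: torch.Tensor, facet_embs: torch.Tensor) -> torch.Tensor:
        # text_repr:  (batch, hidden_dim)              e.g., a PLM [CLS] vector
        # facet_embs: (batch, num_facets, hidden_dim)  one embedding per knowledge facet
        expanded = text_repr.unsqueeze(1).expand(-1, self.num_facets, -1)
        scores = self.gate(torch.cat([expanded, facet_embs], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)          # (batch, num_facets)
        fused = (weights.unsqueeze(-1) * facet_embs).sum(dim=1)
        # Residual combination of text semantics and the selected knowledge.
        return text_repr + fused


# Usage example with random tensors (assumed dimensions):
gate = InputAwareKnowledgeGate(hidden_dim=768, num_facets=4)
text = torch.randn(2, 768)
facets = torch.randn(2, 4, 768)
out = gate(text, facets)   # (2, 768)
```

The softmax over facets lets the gate emphasize whichever knowledge facet is most relevant for a given input, which is one plausible way to realize the task-dependent knowledge selection described in the abstract.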
Pages: 7239-7250
Number of pages: 12
Related papers
50 records in total
  • [41] KNOWLEDGE TRANSLATION TO SUPPORT OLDER DRIVERS: A MULTI-FACETED PROCESS
    [Anonymous]
    GERONTOLOGIST, 2010, 50 : 56 - 56
  • [42] Pre-training for Spoken Language Understanding with Joint Textual and Phonetic Representation Learning
    Chen, Qian
    Wang, Wen
    Zhang, Qinglin
    INTERSPEECH 2021, 2021, : 1244 - 1248
  • [43] Quality, not quantity: a multi-faceted core surgical training collaboration
    Hamdan, M.
    Vaughan-Shaw, P. G.
    Pearson, K. L.
    BRITISH JOURNAL OF SURGERY, 2011, 98 : 70 - 70
  • [44] Contrastive Pre-training with Adversarial Perturbations for Check-in Sequence Representation Learning
    Gong, Letian
    Lin, Youfang
    Guo, Shengnan
    Lin, Yan
    Wang, Tianyi
    Zheng, Erwen
    Zhou, Zeyu
    Wan, Huaiyu
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4276 - 4283
  • [45] Multi-Task Collaborative Pre-Training and Adaptive Token Selection: A Unified Framework for Brain Representation Learning
    Jiang, Ning
    Wang, Gongshu
    Ye, Chuyang
    Liu, Tiantian
    Yan, Tianyi
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5528 - 5539
  • [46] Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries
    Chaspari, Theodora
    Tsiartas, Andreas
    Stein, Leah I.
    Cermak, Sharon A.
    Narayanan, Shrikanth S.
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2015, 62 (03) : 960 - 971
  • [47] Multi-Faceted Hierarchical Multi-Task Learning for Recommender Systems
    Liu, Junning
    Li, Xinjian
    An, Bo
    Xia, Zijie
    Wang, Xu
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3332 - 3341
  • [48] Robot Learning with Sensorimotor Pre-training
    Radosavovic, Ilija
    Shi, Baifeng
    Fu, Letian
    Goldberg, Ken
    Darrell, Trevor
    Malik, Jitendra
    CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [49] Knowledge-Driven Self-Supervised Representation Learning for Facial Action Unit Recognition
    Chang, Yanan
    Wang, Shangfei
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 20385 - 20394
  • [50] A Multi-view Molecular Pre-training with Generative Contrastive Learning
    Liu, Yunwu
    Zhang, Ruisheng
    Yuan, Yongna
    Ma, Jun
    Li, Tongfeng
    Yu, Zhixuan
    INTERDISCIPLINARY SCIENCES-COMPUTATIONAL LIFE SCIENCES, 2024, 16 (03) : 741 - 754