Pre-trained Language Model-based Retrieval and Ranking for Web Search

Cited by: 4
Authors
Zou, Lixin [1 ]
Lu, Weixue [1 ]
Liu, Yiding [1 ]
Cai, Hengyi [1 ]
Chu, Xiaokai [1 ]
Ma, Dehong [1 ]
Shi, Daiting [1 ]
Sun, Yu [1 ]
Cheng, Zhicong [1 ]
Gu, Simiu [1 ]
Wang, Shuaiqiang [1 ]
Yin, Dawei [1 ]
Affiliations
[1] Baidu Inc, Beijing, Peoples R China
Keywords
Pre-trained language model; web retrieval; ranking; ALGORITHM;
DOI
10.1145/3568681
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Pre-trained language representation models (PLMs) such as BERT and Enhanced Representation through kNowledge IntEgration (ERNIE) have been integral to recent improvements on various downstream tasks, including information retrieval. However, directly applying these models to large-scale web search is nontrivial because of the following challenging issues: (1) the prohibitively expensive computation of massive neural PLMs, especially over the long texts of web documents, prevents their deployment in a web search system that demands extremely low latency; (2) the discrepancy between existing task-agnostic pre-training objectives and ad hoc retrieval scenarios, which demand comprehensive relevance modeling, is another major barrier to improving online retrieval and ranking effectiveness; and (3) creating a significant impact on real-world applications calls for practical solutions that seamlessly interweave the resulting PLM with other components into a cooperative system serving web-scale data. Accordingly, in this work, we contribute a series of successfully applied techniques for tackling these issues when deploying the state-of-the-art Chinese pre-trained language model, ERNIE, in an online search engine system. We first present novel practices for expressive PLM-based semantic retrieval with a flexible poly-interaction scheme, and for cost-efficiently contextualizing and ranking web documents with a cheap yet powerful Pyramid-ERNIE architecture. We then devise innovative pre-training and fine-tuning paradigms that explicitly incentivize query-document relevance modeling in PLM-based retrieval and ranking using large-scale noisy and biased post-click behavioral data. We also introduce a series of effective strategies to seamlessly interweave the designed PLM-based models with other conventional components into a cooperative system. Extensive offline and online experimental results show that the proposed techniques are crucial to achieving more effective search performance, and we provide a thorough analysis of our methodology and experimental results.
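To make the "poly-interaction" idea concrete, the sketch below illustrates one common way such a scheme can be realized for PLM-based semantic retrieval: the query side keeps several learned "poly" vectors instead of a single embedding, and relevance is scored by letting the document embedding softly select among them. This is a minimal illustrative sketch, not the paper's actual model; the class names, the toy encoders standing in for ERNIE, and the number of codes are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' code) of a poly-interaction retrieval scorer.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for a PLM encoder (e.g., ERNIE): maps token ids to hidden states."""
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):               # (batch, seq_len)
        return self.emb(token_ids)              # (batch, seq_len, dim)

class PolyInteractionScorer(nn.Module):
    """m learned codes attend over the query states to form m query vectors; the pooled
    document vector then attends over those m vectors to yield a single relevance score."""
    def __init__(self, dim=128, m_codes=4):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(m_codes, dim) * 0.02)

    def forward(self, q_states, d_states):
        # q_states: (batch, Lq, dim), d_states: (batch, Ld, dim)
        d_vec = d_states.mean(dim=1)                                # (batch, dim) pooled document
        attn = torch.softmax(q_states @ self.codes.t(), dim=1)      # (batch, Lq, m)
        q_poly = attn.transpose(1, 2) @ q_states                    # (batch, m, dim) poly query vectors
        mix = torch.softmax(q_poly @ d_vec.unsqueeze(-1), dim=1)    # (batch, m, 1) doc-conditioned weights
        q_vec = (mix * q_poly).sum(dim=1)                           # (batch, dim)
        return (q_vec * d_vec).sum(dim=-1)                          # (batch,) relevance scores

# Toy usage: score two query-document pairs.
enc_q, enc_d, scorer = ToyEncoder(), ToyEncoder(), PolyInteractionScorer()
q = torch.randint(0, 30000, (2, 8))
d = torch.randint(0, 30000, (2, 64))
print(scorer(enc_q(q), enc_d(d)).shape)  # torch.Size([2])
```

In a production retrieval setting, the document vectors would typically be pre-computed and indexed offline, leaving only the lightweight query-side encoding and the poly-interaction mixing at query time, which is what makes such schemes attractive under strict latency budgets.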
Pages: 36
Related papers
50 records in total
  • [31] Enhancing Language Generation with Effective Checkpoints of Pre-trained Language Model
    Park, Jeonghyeok
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2686 - 2694
  • [32] GenDistiller: Distilling Pre-trained Language Models based on an Autoregressive Generative Model
    Gao, Yingying
    Zhang, Shilei
    Deng, Chao
    Feng, Junlan
    INTERSPEECH 2024, 2024, : 3325 - 3329
  • [33] Grammatical Error Correction by Transferring Learning Based on Pre-Trained Language Model
    Han M.
    Wang Y.
    Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, 2022, 56 (11): 1554 - 1560
  • [34] NMT Enhancement based on Knowledge Graph Mining with Pre-trained Language Model
    Yang, Hao
    Qin, Ying
    Deng, Yao
    Wang, Minghan
    2020 22ND INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): DIGITAL SECURITY GLOBAL AGENDA FOR SAFE SOCIETY!, 2020, : 185 - 189
  • [35] H-ERNIE: A Multi-Granularity Pre-Trained Language Model for Web Search
    Chu, Xiaokai
    Zhao, Jiashu
    Zou, Lixin
    Yin, Dawei
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 1478 - 1489
  • [36] Heterogeneous data-based information retrieval using a fine-tuned pre-trained BERT language model
    Shaik, Amjan
    Saxena, Surabhi
    Gupta, Manisha
    Parveen, Nikhat
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (21) : 59537 - 59559
  • [37] Pre-trained CNNs Models for Content based Image Retrieval
    Ahmed, Ali
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2021, 12 (07) : 200 - 206
  • [38] Inference-based No-Learning Approach on Pre-trained BERT Model Retrieval
    Pham, Huu-Long
    Mibayashi, Ryota
    Yamamoto, Takehiro
    Kato, Makoto P.
    Yamamoto, Yusuke
    Shoji, Yoshiyuki
    Ohshima, Hiroaki
    2024 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING, IEEE BIGCOMP 2024, 2024, : 234 - 241
  • [39] ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models
    Zhang, Jianyi
    Muhamed, Aashiq
    Anantharaman, Aditya
    Wang, Guoyin
    Chen, Changyou
    Zhong, Kai
    Cui, Qingjun
    Xu, Yi
    Zeng, Belinda
    Chilimbi, Trishul
    Chen, Yiran
    61ST CONFERENCE OF THE THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 1128 - 1136
  • [40] MCFC: A Momentum-Driven Clicked Feature Compressed Pre-trained Language Model for Information Retrieval
    Li, Dongyang
    Ding, Ruixue
    Xie, Pengjun
    He, Xiaofeng
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT I, NLPCC 2024, 2025, 15359 : 69 - 82