On the Calibration and Uncertainty of Neural Learning to Rank Models for Conversational Search

Cited by: 0
Authors
Penha, Gustavo [1]
Hauff, Claudia [1]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
Keywords
PRINCIPLE;
DOI
None available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
According to the Probability Ranking Principle (PRP), ranking documents in decreasing order of their probability of relevance leads to an optimal document ranking for ad-hoc retrieval. The PRP holds when two conditions are met: [C1] the models are well calibrated, and [C2] the probabilities of relevance are reported with certainty. We know, however, that deep neural networks (DNNs) are often not well calibrated and have several sources of uncertainty, so [C1] and [C2] might not be satisfied by neural rankers. Given the success of neural Learning to Rank (L2R) approaches, especially BERT-based ones, we first analyze under which circumstances deterministic neural rankers are calibrated for conversational search problems. Then, motivated by our findings, we use two techniques to model the uncertainty of neural rankers, leading to the proposed stochastic rankers, which output a predictive distribution of relevance rather than point estimates. Our experimental results on the ad-hoc retrieval task of conversation response ranking reveal that (i) BERT-based rankers are not robustly calibrated, and stochastic BERT-based rankers yield better calibration; and (ii) uncertainty estimation is beneficial both for risk-aware neural ranking, i.e. taking uncertainty into account when ranking documents, and for predicting unanswerable conversational contexts.
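The abstract's notion of risk-aware ranking from a predictive distribution can be illustrated with a minimal sketch. The paper's exact formulation is not given here; the code below assumes a mean-minus-standard-deviation risk adjustment over relevance scores drawn from repeated stochastic forward passes (e.g. MC dropout), with `b` as a hypothetical risk-aversion weight, and uses a toy sampler in place of a real BERT-based ranker.

```python
import numpy as np

def risk_aware_rank(samples: np.ndarray, b: float = 1.0) -> np.ndarray:
    """Rank documents by mean relevance penalized by uncertainty.

    samples: shape (n_docs, n_samples), relevance scores collected
             from repeated stochastic forward passes of the ranker.
    b:       risk-aversion weight (hypothetical; b=0 recovers
             plain mean-score ranking).
    Returns document indices, best first.
    """
    mean = samples.mean(axis=1)
    std = samples.std(axis=1)
    risk_adjusted = mean - b * std  # penalize uncertain documents
    return np.argsort(-risk_adjusted)

# Toy example: doc 0 has the higher mean but much higher variance.
rng = np.random.default_rng(0)
samples = np.stack([
    rng.normal(0.80, 0.30, size=1000),  # high mean, high uncertainty
    rng.normal(0.75, 0.02, size=1000),  # lower mean, confident
])
print(risk_aware_rank(samples, b=1.0))  # risk-averse: doc 1 first
print(risk_aware_rank(samples, b=0.0))  # mean-only: doc 0 first
```

A risk-averse weight flips the order relative to ranking by point estimates alone, which is the intuition behind taking uncertainty into account when ranking documents.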
Pages: 160-170
Page count: 11