On the Calibration and Uncertainty of Neural Learning to Rank Models for Conversational Search

Cited by: 0
Authors
Penha, Gustavo [1 ]
Hauff, Claudia [1 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
Keywords
PRINCIPLE;
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
According to the Probability Ranking Principle (PRP), ranking documents in decreasing order of their probability of relevance leads to an optimal document ranking for ad-hoc retrieval. The PRP holds when two conditions are met: [C1] the models are well calibrated, and [C2] the probabilities of relevance are reported with certainty. We know, however, that deep neural networks (DNNs) are often not well calibrated and have several sources of uncertainty, and thus [C1] and [C2] might not be satisfied by neural rankers. Given the success of neural Learning to Rank (L2R) approaches, and of BERT-based approaches in particular, we first analyze under which circumstances deterministic neural rankers are calibrated for conversational search problems. Then, motivated by our findings, we use two techniques to model the uncertainty of neural rankers, leading to the proposed stochastic rankers, which output a predictive distribution of relevance as opposed to point estimates. Our experimental results on the ad-hoc retrieval task of conversation response ranking(1) reveal that (i) BERT-based rankers are not robustly calibrated and that stochastic BERT-based rankers yield better calibration; and (ii) uncertainty estimation is beneficial both for risk-aware neural ranking, i.e. taking the uncertainty into account when ranking documents, and for predicting unanswerable conversational contexts.
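The abstract does not spell out the two uncertainty-modelling techniques, so the following is only a minimal sketch of the general idea behind a stochastic ranker: Monte Carlo Dropout is assumed here as one common way to obtain a predictive distribution of relevance from a deterministic neural ranker, and all names (RelevanceScorer, predictive_distribution, risk_aware_scores, risk_aversion) are illustrative rather than taken from the paper. The sketch shows how repeated stochastic forward passes yield a distribution of relevance probabilities per document, and how a simple mean-minus-uncertainty criterion can produce a risk-aware ranking.

# Minimal sketch (not the authors' implementation): MC Dropout for a stochastic
# ranker and a simple risk-aware ranking criterion. All names are illustrative.
import torch
import torch.nn as nn


class RelevanceScorer(nn.Module):
    """Toy point-wise ranker: maps a (query, document) feature vector to a
    relevance logit. A BERT-based cross-encoder would play this role in practice."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one relevance logit per (query, doc) pair


def predictive_distribution(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """MC Dropout: keep dropout active at inference time and sample several
    forward passes, yielding a distribution of relevance probabilities per document."""
    model.train()  # enables dropout; for a real model, other layers may need care
    with torch.no_grad():
        samples = torch.stack(
            [torch.sigmoid(model(x)) for _ in range(n_samples)]
        )  # shape: (n_samples, n_docs)
    return samples.mean(dim=0), samples.std(dim=0)


def risk_aware_scores(mean: torch.Tensor, std: torch.Tensor, risk_aversion: float = 1.0):
    """One simple risk-aware criterion: penalise documents whose predicted
    relevance is uncertain (mean minus a multiple of the standard deviation)."""
    return mean - risk_aversion * std


if __name__ == "__main__":
    torch.manual_seed(0)
    docs = torch.randn(5, 16)  # 5 candidate responses for one conversational context
    model = RelevanceScorer()
    mean, std = predictive_distribution(model, docs)
    scores = risk_aware_scores(mean, std)
    ranking = torch.argsort(scores, descending=True)
    print("risk-aware ranking of documents:", ranking.tolist())

A large spread across the sampled scores can also serve as a signal that a conversational context may be unanswerable, which is the second use of uncertainty mentioned in the abstract.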
Pages: 160-170
Number of pages: 11
Related papers
50 records in total
  • [1] Exploring Personalized Neural Conversational Models
    Kottur, Satwik
    Wang, Xiaoyu
    Carvalho, Vitor
    [J]. PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 3728-3734
  • [2] Enhancing Conversational Search with Large Language Models
    Rocchietti, Guido
    Muntean, Cristina Ioana
    Nardini, Franco Maria
    [J]. ERCIM NEWS, 2024, (136): 33-34
  • [3] Towards Building Economic Models of Conversational Search
    Azzopardi, Leif
    Aliannejadi, Mohammad
    Kanoulas, Evangelos
    [J]. ADVANCES IN INFORMATION RETRIEVAL, PT II, 2022, 13186: 31-38
  • [4] Combining Decision Trees and Neural Networks for Learning-to-Rank in Personal Search
    Li, Pan
    Qin, Zhen
    Wang, Xuanhui
    Metzler, Donald
    [J]. KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019: 2032-2040
  • [5] Learning to Relate to Previous Turns in Conversational Search
    Mo, Fengran
    Nie, Jian-Yun
    Huang, Kaiyu
    Mao, Kelong
    Zhu, Yutao
    Li, Peng
    Liu, Yang
    [J]. PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023: 1722-1732
  • [6] Learning to Ask: Conversational Product Search via Representation Learning
    Zou, Jie
    Huang, Jimmy
    Ren, Zhaochun
    Kanoulas, Evangelos
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [7] Probing Neural Dialog Models for Conversational Understanding
    Saleh, Abdelrhman
    Deutsch, Tovly
    Casper, Stephen
    Belinkov, Yonatan
    Shieber, Stuart
    [J]. NLP FOR CONVERSATIONAL AI, 2020: 132-143
  • [8] Learning to Rank for Educational Search Engines
    Usta, Arif
    Altingovde, Ismail Sengor
    Ozcan, Rifat
    Ulusoy, Ozgur
    [J]. IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 2021, 14 (02): 211-225
  • [9] An In-Depth Comparison of Neural and Probabilistic Tree Models for Learning-to-rank
    Tan, Haonan
    Yang, Kaiyu
    Yu, Haitao
    [J]. ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III, 2024, 14610: 468-476
  • [10] Controlling the Risk of Conversational Search via Reinforcement Learning
    Wang, Zhenduo
    Ai, Qingyao
    [J]. PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021: 1968-1977