An In-Depth Comparison of Neural and Probabilistic Tree Models for Learning-to-rank

Cited: 0
Authors
Tan, Haonan [1 ]
Yang, Kaiyu [1 ]
Yu, Haitao [2 ]
Affiliations
[1] Univ Tsukuba, Grad Sch Comprehens Human Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3050821, Japan
[2] Univ Tsukuba, Inst Libr Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3050821, Japan
Source
ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III | 2024 / Vol. 14610
Keywords
Learning-to-rank; Gradient-boosted Decision Trees; Neural Tree Ensembles; Probabilistic Gradient Boosting Machines; INFORMATION-RETRIEVAL; EFFICIENT;
DOI
10.1007/978-3-031-56063-7_39
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning-to-rank has been studied intensively and has demonstrated significant value in fields such as web search and recommender systems. On learning-to-rank datasets given as vectors of feature values, LambdaMART, proposed more than a decade ago, and its descendants based on gradient-boosted decision trees (GBDT) have demonstrated leading performance. Recently, novel tree models have been developed, such as neural tree ensembles that use neural networks to emulate decision tree models, and probabilistic gradient boosting machines (PGBM). However, the effectiveness of these tree models for learning-to-rank has not been comprehensively explored. This study bridges the gap by systematically comparing several representative neural tree ensembles (e.g., TabNet, NODE, and GANDALF), PGBM, and traditional learning-to-rank models on two benchmark datasets. The experimental results reveal that, benefiting from end-to-end gradient-based optimization together with powerful feature representation and adaptive feature selection, neural tree ensembles do hold an advantage for learning-to-rank over conventional tree-based ranking models such as LambdaMART. This finding is notable, as LambdaMART has maintained leading performance over a long period.
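The LambdaMART family discussed in the abstract optimizes ranking metrics through pairwise "lambda" gradients, each weighted by the metric change that swapping the pair would cause. A minimal NumPy sketch of this idea for a single query, with NDCG as the target metric (the function names and the plain sigmoid pair weighting here are illustrative simplifications, not the paper's or any library's exact implementation):

```python
import numpy as np

def dcg_discounts(n):
    # Position discounts 1 / log2(rank + 1) for ranks 1..n.
    return 1.0 / np.log2(np.arange(2, n + 2))

def lambda_gradients(scores, labels):
    """Pairwise lambda gradients for one query, weighted by |Delta NDCG|.

    scores: current model scores; labels: graded relevance (higher = better).
    Returns one gradient per document; ascending these gradients pushes
    more-relevant documents upward in the ranking.
    """
    n = len(scores)
    order = np.argsort(-scores)            # current ranking by score
    rank_of = np.empty(n, dtype=int)
    rank_of[order] = np.arange(n)          # rank position of each document
    disc = dcg_discounts(n)
    gains = 2.0 ** labels - 1.0
    ideal = np.sort(gains)[::-1] @ disc    # ideal DCG, for normalization
    lambdas = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue                   # only pairs where i outranks j in relevance
            # |Delta NDCG| caused by swapping the positions of i and j.
            delta = abs(gains[i] - gains[j]) * abs(
                disc[rank_of[i]] - disc[rank_of[j]]) / ideal
            rho = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
            lambdas[i] += delta * rho      # push the more relevant doc up
            lambdas[j] -= delta * rho      # and the less relevant doc down
    return lambdas
```

In GBDT rankers these per-document lambdas serve as the pseudo-targets that each new tree is fit to; the neural tree ensembles compared in the paper instead backpropagate through the ranking loss end to end.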
Pages: 468-476
Number of pages: 9
Related Papers
50 records in total
  • [1] An in-depth study on adversarial learning-to-rank
    Yu, Hai-Tao
    Piryani, Rajesh
    Jatowt, Adam
    Inagaki, Ryo
    Joho, Hideo
    Kim, Kyoung-Sook
    INFORMATION RETRIEVAL JOURNAL, 2023, 26 (01)
  • [2] Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank?
    Lyu, Lijun
    Roy, Nirmal
    Oosterhuis, Harrie
    Anand, Avishek
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT IV, 2024, 14611 : 384 - 402
  • [3] The Importance of the Depth for Text-Image Selection Strategy in Learning-To-Rank
    Buffoni, David
    Tollari, Sabrina
    Gallinari, Patrick
    ADVANCES IN INFORMATION RETRIEVAL, 2011, 6611 : 743 - 746
  • [4] Deep Neural Network Regularization for Feature Selection in Learning-to-Rank
    Rahangdale, Ashwini
    Raut, Shital
    IEEE ACCESS, 2019, 7 : 53988 - 54006
  • [5] Incorporating Query-Specific Feedback into Learning-to-Rank Models
    Can, Ethem F.
    Croft, W. Bruce
    Manmatha, R.
    SIGIR'14: PROCEEDINGS OF THE 37TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2014, : 1035 - 1038
  • [6] Quality versus efficiency in document scoring with learning-to-rank models
    Capannini, Gabriele
    Lucchese, Claudio
    Nardini, Franco Maria
    Orlando, Salvatore
    Perego, Raffaele
    Tonellotto, Nicola
    INFORMATION PROCESSING & MANAGEMENT, 2016, 52 (06) : 1161 - 1177
  • [7] Towards Explainable Test Case Prioritisation with Learning-to-Rank Models
    Ramirez, Aurora
    Berrios, Mario
    Raul Romero, Jose
    Feldt, Robert
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW, 2023, : 66 - 69
  • [8] Combining Decision Trees and Neural Networks for Learning-to-Rank in Personal Search
    Li, Pan
    Qin, Zhen
    Wang, Xuanhui
    Metzler, Donald
    KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 2032 - 2040
  • [9] Optimizing Learning-to-Rank Models for Ex-Post Fair Relevance
    Gorantla, Sruthi
    Bhansali, Eshaan
    Deshpande, Amit
    Louis, Anand
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 1525 - 1534