RankEval: Evaluation and investigation of ranking models

Cited by: 2
Authors
Lucchese, Claudio [1 ]
Muntean, Cristina Ioana [2 ]
Nardini, Franco Maria [2 ]
Perego, Raffaele [2 ]
Trani, Salvatore [2 ]
Affiliations
[1] Ca Foscari Univ Venice, Venice, Italy
[2] ISTI CNR, Pisa, Italy
Funding
European Union Horizon 2020
Keywords
Learning to Rank; Evaluation; Analysis;
DOI
10.1016/j.softx.2020.100614
CLC number
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
RankEval is an open-source Python tool for the analysis and evaluation of ranking models based on ensembles of decision trees. Learning-to-Rank (LtR) approaches that generate tree ensembles are considered the most effective solution for difficult ranking tasks, and several impactful LtR libraries have been developed with the aim of improving ranking quality and training efficiency. However, these libraries offer little support for hyper-parameter tuning and in-depth analysis of the learned models, and even their implementations of the most popular Information Retrieval (IR) metrics differ, making it difficult to compare different models. RankEval overcomes these limitations by providing a unified environment in which to perform an easy, comprehensive inspection and assessment of ranking models trained with different machine learning libraries. The tool focuses on efficiency, flexibility and extensibility, and is fully interoperable with the most popular LtR libraries. (C) 2020 The Author(s). Published by Elsevier B.V.
Pages: 5
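
As an illustration of the unified evaluation workflow described in the abstract, the sketch below shows how a tree-ensemble model trained with LightGBM might be loaded and scored with NDCG@10. It is a minimal example based on the API outlined in the RankEval documentation, not a definitive usage guide: the exact class and parameter names (Dataset.load, RTEnsemble, NDCG, model_performance) and the file paths are assumptions that should be checked against the installed version.

    # Minimal RankEval usage sketch; names and paths are assumptions (see note above).
    from rankeval.dataset import Dataset
    from rankeval.model import RTEnsemble
    from rankeval.metrics import NDCG
    from rankeval.analysis.effectiveness import model_performance

    # Load a test split in SVMLight/LETOR format (placeholder path).
    test_set = Dataset.load("msn.fold1.test.txt", format="svmlight")

    # Import a LightGBM tree ensemble into RankEval's common model representation
    # (placeholder path; other formats such as XGBoost or QuickRank are also supported).
    model = RTEnsemble("lightgbm.model", name="LightGBM model", format="LightGBM")

    # Compute NDCG@10 for the model on the test split and print the result.
    ndcg_10 = NDCG(cutoff=10)
    performance = model_performance(datasets=[test_set], models=[model], metrics=[ndcg_10])
    print(performance)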