Preference-based learning to rank

Cited by: 2
Authors
Nir Ailon
Mehryar Mohri
Affiliations
[1] Computer Science Faculty, Technion – Israel Institute of Technology
[2] Courant Institute of Mathematical Sciences
Source
Machine Learning | 2010, Vol. 80
Keywords
Learning to rank; Machine learning reductions; ROC
DOI
Not available
Abstract
This paper presents an efficient preference-based ranking algorithm running in two stages. In the first stage, the algorithm learns a preference function defined over pairs, as in a standard binary classification problem. In the second stage, it makes use of that preference function to produce an accurate ranking, thereby reducing the learning problem of ranking to binary classification. This reduction is based on the familiar QuickSort and guarantees an expected pairwise misranking loss of at most twice that of the binary classifier derived in the first stage. Furthermore, in the important special case of bipartite ranking, the factor of two in loss is reduced to one. This improved bound also applies to the regret achieved by our ranking and that of the binary classifier obtained.
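As a concrete illustration of the second stage, here is a minimal Python sketch of the QuickSort-based reduction described above: randomized QuickSort in which the learned pairwise preference function serves as the comparator. The names (quicksort_rank, prefer) and the toy score-based preference standing in for a first-stage binary classifier are hypothetical, for illustration only, not the authors' code.

    import random

    def quicksort_rank(items, prefer, rng=None):
        """Rank items with randomized QuickSort, using a learned pairwise
        preference function as the comparator.

        prefer(u, v) gives the probability that u should be ranked above v;
        it need not be transitive, which is why the pivot is randomized.
        """
        rng = rng or random.Random()
        if len(items) <= 1:
            return list(items)
        i = rng.randrange(len(items))                 # random pivot
        pivot, rest = items[i], items[:i] + items[i + 1:]
        above, below = [], []
        for x in rest:
            # Place x above the pivot with probability prefer(x, pivot).
            (above if rng.random() < prefer(x, pivot) else below).append(x)
        return (quicksort_rank(above, prefer, rng)
                + [pivot]
                + quicksort_rank(below, prefer, rng))

    # Toy stand-in for the first-stage classifier: prefer higher-scoring items.
    scores = {"a": 0.9, "b": 0.1, "c": 0.5}
    prefer = lambda u, v: 1.0 if scores[u] > scores[v] else 0.0
    print(quicksort_rank(list(scores), prefer, random.Random(0)))  # ['a', 'c', 'b']

Because the learned comparator may be non-transitive, the random pivot choice is what underlies the guarantee stated above: the expected pairwise misranking loss of the output ranking is at most twice the pairwise loss of the classifier, and matches it in the bipartite case.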
Pages: 189 - 211
Page count: 22
Related papers
50 results in total
  • [1] Preference-based learning to rank
    Ailon, Nir
    Mohri, Mehryar
    [J]. MACHINE LEARNING, 2010, 80 (2-3) : 189 - 211
  • [2] A Practical Divide-and-Conquer Approach for Preference-Based Learning to Rank
    Yang, Han-Jay
    Lin, Hsuan-Tien
    [J]. 2015 CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI), 2015 : 554 - 561
  • [3] Preference-Based Policy Learning
    Akrour, Riad
    Schoenauer, Marc
    Sebag, Michele
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT I, 2011, 6911 : 12 - 27
  • [4] Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning
    Cheng, Weiwei
    Fürnkranz, Johannes
    Hüllermeier, Eyke
    Park, Sang-Hyeun
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT I, 2011, 6911 : 312 - 327
  • [5] Learning state importance for preference-based reinforcement learning
    Zhang, Guoxi
    Kashima, Hisashi
    [J]. MACHINE LEARNING, 2024, 113 (4) : 1885 - 1901
  • [6] Preference-based reinforcement learning: evolutionary direct policy search using a preference-based racing algorithm
    Busa-Fekete, Róbert
    Szörényi, Balázs
    Weng, Paul
    Cheng, Weiwei
    Hüllermeier, Eyke
    [J]. MACHINE LEARNING, 2014, 97 (3) : 327 - 351
  • [7] Task Transfer by Preference-Based Cost Learning
    Jing, Mingxuan
    Ma, Xiaojian
    Huang, Wenbing
    Sun, Fuchun
    Liu, Huaping
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019 : 2471 - 2478
  • [8] A Survey of Preference-Based Reinforcement Learning Methods
    Wirth, Christian
    Akrour, Riad
    Neumann, Gerhard
    Fürnkranz, Johannes
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2017, 18