Learning to Rank from Noisy Data

Cited by: 6
Authors
Ding, Wenkui [1 ]
Geng, Xiubo [2 ]
Zhang, Xu-Dong [1 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[2] Yahoo Labs Beijing, Beijing, Peoples R China
Keywords
Noisy data; robust learning
DOI
10.1145/2576230
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning to rank, which learns a ranking function from training data, has become an emerging research area in information retrieval and machine learning. However, most existing work on learning to rank assumes that the training data is clean, which is not always the case. The ambiguity of query intent, the lack of domain knowledge, and the vague definition of relevance levels all make it difficult for common annotators to assign reliable relevance labels to some documents. As a result, the relevance labels in the training data of learning to rank usually contain noise. If we ignore this fact, the performance of learning-to-rank algorithms suffers. In this article, we propose accounting for labeling noise in the process of learning to rank and use a two-step approach to extend existing algorithms to handle noisy training data. In the first step, we estimate the degree of labeling noise for each training document. To this end, we assume that the majority of the relevance labels in the training data are reliable, and we use a graphical model to describe the generative process of a training query, the feature vectors of its associated documents, and the relevance labels of these documents. The parameters of the graphical model are learned by maximum likelihood estimation. Then the conditional probability of the relevance label given the feature vector of a document is computed: if this probability is large, we regard the degree of labeling noise for the document as small; otherwise, we regard it as large. In the second step, we extend existing learning-to-rank algorithms by incorporating the estimated degree of labeling noise into their loss functions. Specifically, we give larger weights to training documents with smaller degrees of labeling noise and smaller weights to those with larger degrees. As examples, we demonstrate the extensions for McRank, RankSVM, RankBoost, and RankNet. Empirical results on benchmark datasets show that the proposed approach can effectively distinguish noisy documents from clean ones, and the extended learning-to-rank algorithms achieve better performance than the baselines.
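To make the two-step approach described in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation. In place of the paper's graphical model, it fits a simple Gaussian class-conditional model per relevance level by maximum likelihood and reads the resulting posterior P(label | feature vector) as a per-document reliability (low reliability = high degree of labeling noise); it then plugs these reliabilities as weights into a RankSVM-style pairwise hinge loss. The Gaussian assumption, the function names, and the pair weight w[i] * w[j] are illustrative assumptions rather than details taken from the paper.

import numpy as np

def estimate_label_reliability(X, y):
    """Step 1 (sketch): posterior P(label | feature vector) under a naive
    Gaussian class-conditional model fit by maximum likelihood (an assumed
    stand-in for the paper's graphical model).
    X: (n_docs, n_features) NumPy array; y: integer relevance labels.
    A high posterior is read as a low degree of labeling noise."""
    stats = {}
    for label in np.unique(y):
        Xl = X[y == label]
        # Per-label mean, variance (with a small floor), and prior.
        stats[label] = (Xl.mean(axis=0), Xl.var(axis=0) + 1e-6, len(Xl) / len(y))
    reliabilities = np.zeros(len(y))
    for i, (x, label) in enumerate(zip(X, y)):
        scores = {}
        for lab, (mu, var, prior) in stats.items():
            log_lik = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
            scores[lab] = prior * np.exp(log_lik)
        total = sum(scores.values())
        reliabilities[i] = scores[label] / total if total > 0 else 1.0 / len(stats)
    return reliabilities

def weighted_pairwise_hinge_loss(model_scores, y, w):
    """Step 2 (sketch): a RankSVM-style pairwise hinge loss in which each pair
    is down-weighted by the reliabilities of the two documents' labels."""
    loss = 0.0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                pair_weight = w[i] * w[j]  # assumed weighting scheme, for illustration
                loss += pair_weight * max(0.0, 1.0 - (model_scores[i] - model_scores[j]))
    return loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))           # documents of one query (toy data)
    y = rng.integers(0, 3, size=20)        # relevance labels in {0, 1, 2}
    w = estimate_label_reliability(X, y)   # small weight ~ likely noisy label
    model_scores = X @ rng.normal(size=5)  # stand-in for a ranker's scores
    print(weighted_pairwise_hinge_loss(model_scores, y, w))

The same reliability weights could in principle be attached to pointwise (McRank-style) or other pairwise and listwise surrogates; the paper's actual extensions of McRank, RankSVM, RankBoost, and RankNet differ in their exact loss formulations.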
Pages: 21
Related Papers
50 records in total
  • [1] Learning to Rank From a Noisy Crowd
    Kumar, Abhimanu
    Lease, Matthew
    PROCEEDINGS OF THE 34TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR'11), 2011, : 1221 - 1222
  • [2] Learning Programs from Noisy Data
    Raychev, Veselin
    Bielik, Pavol
    Vechev, Martin
    Krause, Andreas
    ACM SIGPLAN NOTICES, 2016, 51 (01) : 761 - 774
  • [3] Learning from Noisy Data with Robust Representation Learning
    Li, Junnan
    Xiong, Caiming
    Hoi, Steven C. H.
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9465 - 9474
  • [4] Learning Explanatory Rules from Noisy Data
    Evans, Richard
    Grefenstette, Edward
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2018, 61 : 1 - 64
  • [5] Robust Graph Learning From Noisy Data
    Kang, Zhao
    Pan, Haiqi
    Hoi, Steven C. H.
    Xu, Zenglin
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (05) : 1833 - 1843
  • [6] Learning to Learn from Noisy Labeled Data
    Li, Junnan
    Wong, Yongkang
    Zhao, Qi
    Kankanhalli, Mohan S.
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 5046 - 5054
  • [7] Learning from Noisy Similar and Dissimilar Data
    Dan, Soham
    Bao, Han
    Sugiyama, Masashi
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT II, 2021, 12976 : 233 - 249
  • [8] Nonconvex Low-Rank Tensor Completion from Noisy Data
    Cai, Changxiao
    Li, Gen
    Poor, H. Vincent
    Chen, Yuxin
    OPERATIONS RESEARCH, 2022, 70 (02) : 1219 - 1237