An in-depth study on adversarial learning-to-rank

Cited: 1
Authors
Yu, Hai-Tao [1 ]
Piryani, Rajesh [1 ]
Jatowt, Adam [2 ,3 ]
Inagaki, Ryo [1 ,5 ]
Joho, Hideo [1 ]
Kim, Kyoung-Sook [4 ]
Affiliations
[1] Univ Tsukuba, Fac Lib Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
[2] Univ Innsbruck, Dept Comp Sci, Innrain 52, A-6020 Innsbruck, Austria
[3] Univ Innsbruck, DiSC, Innrain 52, A-6020 Innsbruck, Austria
[4] Natl Inst Adv Ind Sci & Technol, 2-4-7 Aomi,Koto Ku, Tokyo 1350064, Japan
[5] Univ Tsukuba, Grad Sch Lib Informat & Media Studies, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
Source
INFORMATION RETRIEVAL JOURNAL | 2023, Vol. 26, Issue 1
Keywords
Learning-to-rank; Adversarial optimization; Variational divergence minimization; Reparameterization; Information retrieval
DOI
10.1007/s10791-023-09419-0
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
In light of recent advances in adversarial learning, there has been strong and continuing interest in exploring how to perform adversarial learning-to-rank. Previous adversarial ranking methods [e.g., IRGAN by Wang et al. (IRGAN: a minimax game for unifying generative and discriminative information retrieval models. Proceedings of the 40th SIGIR pp. 515-524, 2017)] mainly follow the generative adversarial networks (GAN) framework (Goodfellow et al. in Generative adversarial nets. Proceedings of NeurIPS pp. 2672-2680, 2014) and focus on either pointwise or pairwise optimization based on rule-based adversarial sampling. Unfortunately, many open problems remain. For example, how to perform listwise adversarial learning-to-rank has not been explored. Furthermore, since GAN has many variants, such as f-GAN (Nowozin et al. in Proceedings of the 30th international conference on neural information processing systems, pp. 271-279, 2016) and EBGAN (Zhao et al. in Energy-based generative adversarial network. International conference on learning representations (ICLR), 2017), a natural question arises: to what extent does the adversarial learning strategy affect the ranking performance? To cope with these problems, firstly, we show how to perform adversarial learning-to-rank in a listwise manner by following the GAN framework. Secondly, we investigate the effects of using a different adversarial learning framework, namely f-GAN. Specifically, we propose a new general adversarial learning-to-rank framework via variational divergence minimization (referred to as IRf-GAN), and we show how to perform pointwise, pairwise and listwise adversarial learning-to-rank within this single framework. In order to clearly understand the pros and cons of adversarial learning-to-rank, we conduct a series of experiments using multiple benchmark collections.
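The variational divergence minimization that the abstract refers to follows the general f-GAN recipe: a critic tightens a variational lower bound on an f-divergence between the real and generated distributions while the generator minimizes that bound. As a minimal illustration of the bound itself (a generic sketch, not the paper's IRf-GAN implementation; the Gaussian samples and the closed-form critic are assumptions chosen so that the optimum is known), the KL-divergence case can be estimated from samples as follows:

```python
import math
import random

def variational_lower_bound(real, fake, critic):
    """f-GAN lower bound F(T) = E_P[T(x)] - E_Q[f*(T(x))] on KL(P || Q),
    where the Fenchel conjugate of f(u) = u*log(u) is f*(t) = exp(t - 1)."""
    term_p = sum(critic(x) for x in real) / len(real)
    term_q = sum(math.exp(critic(x) - 1.0) for x in fake) / len(fake)
    return term_p - term_q

rng = random.Random(0)
real = [rng.gauss(1.0, 1.0) for _ in range(20000)]  # samples from P = N(1, 1)
fake = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # samples from Q = N(0, 1)

# For these two Gaussians the optimal critic is T*(x) = 1 + log(p(x)/q(x)) = x + 0.5,
# and the true divergence is KL(N(1,1) || N(0,1)) = 0.5; the estimate should be close.
bound = variational_lower_bound(real, fake, lambda x: x + 0.5)
```

In an adversarial ranker the critic would be a learned network over (query, document) representations rather than a closed-form function, and swapping the conjugate pair changes which divergence is minimized.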
The experimental results demonstrate that: (1) Thanks to the flexibility of using different divergence functions, IRf-GAN-pair performs significantly better than adversarial learning-to-rank methods based on the IRGAN framework, which reveals that the learning strategy significantly affects adversarial ranking performance. (2) An in-depth comparison with conventional ranking methods shows that although adversarial learning-to-rank models can achieve performance comparable to conventional neural-network-based methods, they are still inferior to LambdaMART by a large margin. In particular, we pinpoint that the weakness of adversarial learning-to-rank is largely attributable to gradient estimation based on sampled rankings that diverge significantly from the ideal rankings. Careful examination of this weakness is highly recommended when developing adversarial learning-to-rank approaches.
Pages: 24
Related papers
50 articles in total
  • [1] An in-depth study on adversarial learning-to-rank
    Hai-Tao Yu
    Rajesh Piryani
    Adam Jatowt
    Ryo Inagaki
    Hideo Joho
    Kyoung-Sook Kim
    Information Retrieval Journal, 2023, 26
  • [2] An In-Depth Comparison of Neural and Probabilistic Tree Models for Learning-to-rank
    Tan, Haonan
    Yang, Kaiyu
    Yu, Haitao
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III, 2024, 14610 : 468 - 476
  • [3] The Importance of the Depth for Text-Image Selection Strategy in Learning-To-Rank
    Buffoni, David
    Tollari, Sabrina
    Gallinari, Patrick
    ADVANCES IN INFORMATION RETRIEVAL, 2011, 6611 : 743 - 746
  • [4] Learning-to-Count by Learning-to-Rank
    D'Alessandro, Adriano C.
    Mahdavi-Amiri, Ali
    Hamarneh, Ghassan
    2023 20TH CONFERENCE ON ROBOTS AND VISION, CRV, 2023, : 105 - 112
  • [5] Learning-to-Rank with Nested Feedback
    Sagtani, Hitesh
    Jeunen, Olivier
    Ustimenko, Aleksei
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT III, 2024, 14610 : 306 - 315
  • [6] Lero: A Learning-to-Rank Query Optimizer
    Zhu, Rong
    Chen, Wei
    Ding, Bolin
    Chen, Xingguang
    Pfadler, Andreas
    Wu, Ziniu
    Zhou, Jingren
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2023, 16 (06): : 1466 - 1479
  • [7] Scale-Invariant Learning-to-Rank
    Petrozziello, Alessio
    Sommeregger, Christian
    Lim, Ye-Sheen
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 826 - 828
  • [8] Unbiased Learning-to-Rank with Biased Feedback
    Joachims, Thorsten
    Swaminathan, Adith
    Schnabel, Tobias
    WSDM'17: PROCEEDINGS OF THE TENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2017, : 781 - 789
  • [9] A General Framework for Counterfactual Learning-to-Rank
    Agarwal, Aman
    Takatsu, Kenta
    Zaitsev, Ivan
    Joachims, Thorsten
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 5 - 14
  • [10] Unbiased Learning-to-Rank with Biased Feedback
    Joachims, Thorsten
    Swaminathan, Adith
    Schnabel, Tobias
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 5284 - 5288