An in-depth study on adversarial learning-to-rank

Cited by: 1
Authors
Yu, Hai-Tao [1 ]
Piryani, Rajesh [1 ]
Jatowt, Adam [2 ,3 ]
Inagaki, Ryo [1 ,5 ]
Joho, Hideo [1 ]
Kim, Kyoung-Sook [4 ]
Affiliations
[1] Univ Tsukuba, Fac Lib Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
[2] Univ Innsbruck, Dept Comp Sci, Innrain 52, A-6020 Innsbruck, Austria
[3] Univ Innsbruck, DiSC, Innrain 52, A-6020 Innsbruck, Austria
[4] Natl Inst Adv Ind Sci & Technol, 2-4-7 Aomi,Koto Ku, Tokyo 1350064, Japan
[5] Univ Tsukuba, Grad Sch Lib Informat & Media Studies, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
Source
INFORMATION RETRIEVAL JOURNAL | 2023, Vol. 26, No. 1
Keywords
Learning-to-rank; Adversarial optimization; Variational divergence minimization; Reparameterization; INFORMATION-RETRIEVAL;
DOI
10.1007/s10791-023-09419-0
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In light of recent advances in adversarial learning, there has been strong and continuing interest in exploring how to perform adversarial learning-to-rank. Previous adversarial ranking methods [e.g., IRGAN by Wang et al. (IRGAN: a minimax game for unifying generative and discriminative information retrieval models. Proceedings of the 40th SIGIR, pp. 515-524, 2017)] mainly follow the generative adversarial networks (GAN) framework (Goodfellow et al. in Generative adversarial nets. Proceedings of NeurIPS, pp. 2672-2680, 2014) and focus on either pointwise or pairwise optimization based on rule-based adversarial sampling. Unfortunately, many open problems remain. For example, how to perform listwise adversarial learning-to-rank has not been explored. Furthermore, since GAN has many variants, such as f-GAN (Nowozin et al. in Proceedings of the 30th international conference on neural information processing systems, pp. 271-279, 2016) and EBGAN (Zhao et al. in Energy-based generative adversarial network. International conference on learning representations (ICLR), 2017), a natural question arises: to what extent does the adversarial learning strategy affect ranking performance? To address these problems, we first show how to perform adversarial learning-to-rank in a listwise manner by following the GAN framework. Second, we investigate the effects of using a different adversarial learning framework, namely f-GAN. Specifically, a new general adversarial learning-to-rank framework via variational divergence minimization is proposed (referred to as IRf-GAN). Furthermore, we show how to perform pointwise, pairwise and listwise adversarial learning-to-rank within the same IRf-GAN framework. In order to clearly understand the pros and cons of adversarial learning-to-rank, we conduct a series of experiments using multiple benchmark collections. The experimental results demonstrate that: (1) Thanks to the flexibility of being able to use different divergence functions, IRf-GAN-pair performs significantly better than adversarial learning-to-rank methods based on the IRGAN framework. This reveals that the learning strategy significantly affects adversarial ranking performance. (2) An in-depth comparison with conventional ranking methods shows that although adversarial learning-to-rank models can achieve performance comparable to conventional methods based on neural networks, they are still inferior to LambdaMART by a large margin. In particular, we pinpoint that the weakness of adversarial learning-to-rank is largely attributable to gradient estimation based on sampled rankings, which diverge significantly from ideal rankings. Careful examination of this weakness is highly recommended when developing adversarial learning-to-rank approaches.
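For orientation, the variational-divergence-minimization objective underlying f-GAN (Nowozin et al., 2016), on which IRf-GAN builds, can be sketched as the following lower bound; here P is the data (ground-truth) distribution, Q_theta the generator's distribution, T_phi the variational (discriminator) function, and f* the Fenchel conjugate of the chosen convex function f. This is the generic f-GAN bound, not the authors' exact IRf-GAN formulation:

\[
D_f\big(P \,\|\, Q_\theta\big) \;\ge\; F(\theta,\phi)
= \mathbb{E}_{x \sim P}\big[T_\phi(x)\big]
- \mathbb{E}_{x \sim Q_\theta}\big[f^{*}\big(T_\phi(x)\big)\big]
\]

The discriminator maximizes F over phi to tighten the lower bound on the f-divergence, while the generator minimizes it over theta; instantiating f with different divergences (e.g., KL, reverse KL, Jensen-Shannon, Pearson chi-squared) yields different adversarial objectives, which is the flexibility the abstract credits for the gains of IRf-GAN-pair.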
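The abstract attributes the main weakness of adversarial learning-to-rank to gradient estimation based on sampled rankings. The following is a minimal, hypothetical sketch of what such an estimator looks like in the listwise setting (PyTorch; the model sizes, reward definition, and helper names are illustrative assumptions, not the paper's implementation): the generator's Plackett-Luce policy samples a ranking, the discriminator supplies a listwise reward, and the generator is updated with a REINFORCE-style score-function gradient, which becomes unreliable when sampled rankings stray far from ideal ones.

# Sketch only: listwise adversarial LTR generator step with a sampled-ranking gradient.
import torch

torch.manual_seed(0)
n_docs, n_feats = 5, 10
feats = torch.randn(n_docs, n_feats)           # feature vectors of one query's documents

gen = torch.nn.Linear(n_feats, 1)              # generator: per-document relevance scores
disc = torch.nn.Linear(n_feats, 1)             # discriminator: per-document scores
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-2)

def sample_ranking(scores):
    """Sample a full ranking from the Plackett-Luce model; return it with its log-probability."""
    remaining = list(range(len(scores)))
    ranking, logp = [], torch.tensor(0.0)
    for _ in range(len(scores)):
        probs = torch.softmax(scores[remaining], dim=0)
        idx = torch.multinomial(probs, 1).item()
        logp = logp + torch.log(probs[idx])
        ranking.append(remaining.pop(idx))
    return ranking, logp

g_scores = gen(feats).squeeze(-1)
ranking, logp = sample_ranking(g_scores)

# Listwise "reward": position-discounted sum of the discriminator's scores over the
# sampled ranking (a stand-in for a listwise discriminator judgement).
with torch.no_grad():
    d_scores = disc(feats).squeeze(-1)
    discounts = 1.0 / torch.log2(torch.arange(2, n_docs + 2, dtype=torch.float))
    reward = (d_scores[ranking] * discounts).sum()

g_opt.zero_grad()
(-reward * logp).backward()                    # REINFORCE: high-variance gradient estimate
g_opt.step()

Because the gradient signal comes only from rankings the current policy happens to sample, early in training these can be far from ideal rankings, which is consistent with the weakness the abstract highlights.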
Pages: 24
Related papers (50 in total)
  • [31] Distributionally robust learning-to-rank under the Wasserstein metric
    Sotudian, Shahabeddin
    Chen, Ruidi
    Paschalidis, Ioannis Ch.
    PLOS ONE, 2023, 18 (03):
  • [32] Feature Selection for Learning-to-Rank using Simulated Annealing
    Allvi, Mustafa Wasif
    Hasan, Mahamudul
    Rayon, Lazim
    Shahabuddin, Mohammad
    Khan, Md Mosaddek
    Ibrahim, Muhammad
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2020, 11 (03) : 699 - 705
  • [33] Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm
    Hu, Ziniu
    Wang, Yang
    Peng, Qu
    Li, Hang
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 2830 - 2836
  • [34] EGRank: An exponentiated gradient algorithm for sparse learning-to-rank
    Du, Lei
    Pan, Yan
    Ding, Jintang
    Lai, Hanjiang
    Huang, Changqin
    INFORMATION SCIENCES, 2018, 467 : 342 - 356
  • [35] An Evaluation of Learning-to-Rank Methods for Lurking Behavior Analysis
    Perna, Diego
    Tagarelli, Andrea
    PROCEEDINGS OF THE 25TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (UMAP'17), 2017, : 381 - 382
  • [36] Toward Understanding Privileged Features Distillation in Learning-to-Rank
    Yang, Shuo
    Sanghavi, Sujay
    Rahmanian, Holakou
    Bakus, Jan
    Vishwanathan, S. V. N.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [37] RankFormer: Listwise Learning-to-Rank Using Listwide Labels
    Buyl, Maarten
    Missault, Paul
    Sondag, Pierre-Antoine
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3762 - 3773
  • [38] RankEval: An Evaluation and Analysis Framework for Learning-to-Rank Solutions
    Lucchese, Claudio
    Muntean, Cristina Ioana
    Nardini, Franco Maria
    Perego, Raffaele
    Trani, Salvatore
    SIGIR'17: PROCEEDINGS OF THE 40TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2017, : 1281 - 1284
  • [39] Unbiased Learning-to-Rank Needs Unconfounded Propensity Estimation
    Luo, Dan
    Zou, Lixin
    Ai, Qingyao
    Chen, Zhiyu
    Li, Chenliang
    Yin, Dawei
    Davison, Brian D.
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 1535 - 1545
  • [40] Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank?
    Lyu, Lijun
    Roy, Nirmal
    Oosterhuis, Harrie
    Anand, Avishek
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT IV, 2024, 14611 : 384 - 402