An in-depth study on adversarial learning-to-rank

Cited: 1
Authors
Yu, Hai-Tao [1 ]
Piryani, Rajesh [1 ]
Jatowt, Adam [2 ,3 ]
Inagaki, Ryo [1 ,5 ]
Joho, Hideo [1 ]
Kim, Kyoung-Sook [4 ]
Affiliations
[1] Univ Tsukuba, Fac Lib Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
[2] Univ Innsbruck, Dept Comp Sci, Innrain 52, A-6020 Innsbruck, Austria
[3] Univ Innsbruck, DiSC, Innrain 52, A-6020 Innsbruck, Austria
[4] Natl Inst Adv Ind Sci & Technol, 2-4-7 Aomi,Koto Ku, Tokyo 1350064, Japan
[5] Univ Tsukuba, Grad Sch Lib Informat & Media Studies, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
Source
INFORMATION RETRIEVAL JOURNAL | 2023, Vol. 26, Issue 01
Keywords
Learning-to-rank; Adversarial optimization; Variational divergence minimization; Reparameterization; INFORMATION-RETRIEVAL;
DOI
10.1007/s10791-023-09419-0
CLC number
TP [Automation technology, computer technology];
Discipline code
0812
Abstract
In light of recent advances in adversarial learning, there has been strong and continuing interest in how to perform adversarial learning-to-rank. Previous adversarial ranking methods [e.g., IRGAN by Wang et al. (IRGAN: a minimax game for unifying generative and discriminative information retrieval models. Proceedings of the 40th SIGIR, pp. 515-524, 2017)] mainly follow the generative adversarial networks (GAN) framework (Goodfellow et al. in Generative adversarial nets. Proceedings of NeurIPS, pp. 2672-2680, 2014) and focus on either pointwise or pairwise optimization based on rule-based adversarial sampling. Unfortunately, many open problems remain. For example, how to perform listwise adversarial learning-to-rank has not been explored. Furthermore, since GAN has many variants, such as f-GAN (Nowozin et al. in Proceedings of the 30th international conference on neural information processing systems, pp. 271-279, 2016) and EBGAN (Zhao et al. in Energy-based generative adversarial network. International conference on learning representations (ICLR), 2017), a natural question arises: to what extent does the adversarial learning strategy affect ranking performance? To address these problems, we first show how to perform adversarial learning-to-rank in a listwise manner within the GAN framework. Second, we investigate the effects of using a different adversarial learning framework, namely f-GAN. Specifically, we propose a new general adversarial learning-to-rank framework based on variational divergence minimization (referred to as IRf-GAN), and we show how to perform pointwise, pairwise, and listwise adversarial learning-to-rank within this single framework. To clearly understand the pros and cons of adversarial learning-to-rank, we conduct a series of experiments on multiple benchmark collections.
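As background for the variational divergence minimization underlying IRf-GAN, the standard f-GAN saddle-point objective (in Nowozin et al.'s notation; the abstract does not give the paper's exact instantiation) is

$$
\min_{\theta}\,\max_{\omega}\; F(\theta,\omega)
  = \mathbb{E}_{x\sim P}\!\left[T_{\omega}(x)\right]
  - \mathbb{E}_{x\sim Q_{\theta}}\!\left[f^{*}\!\left(T_{\omega}(x)\right)\right],
$$

where $P$ is the data distribution, $Q_{\theta}$ the generator's distribution, $T_{\omega}$ the variational (discriminator) function, and $f^{*}$ the Fenchel conjugate of the convex function $f$ defining the chosen f-divergence. Different choices of $f$ recover different divergences (e.g., $f(u)=u\log u$ yields the KL divergence), which is the flexibility the abstract credits for IRf-GAN-pair's gains.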
The experimental results demonstrate that: (1) Thanks to the flexibility of choosing among different divergence functions, IRf-GAN-pair performs significantly better than adversarial learning-to-rank methods based on the IRGAN framework, revealing that the learning strategy significantly affects adversarial ranking performance. (2) An in-depth comparison with conventional ranking methods shows that although adversarial learning-to-rank models can achieve performance comparable to conventional neural-network-based methods, they remain inferior to LambdaMART by a large margin. In particular, we pinpoint that this weakness is largely attributable to gradient estimation based on sampled rankings that diverge significantly from the ideal rankings. Careful examination of this weakness is highly recommended when developing adversarial learning-to-rank approaches.
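The weakness pinpointed above concerns gradient estimation from sampled rankings. A minimal, illustrative sketch (not the paper's algorithm) of score-function (REINFORCE) gradient estimation over rankings sampled from a Plackett-Luce model — with a hypothetical stand-in reward in place of a learned discriminator — shows the mechanism: the update is the reward-weighted gradient of the log-probability of the sampled ranking, so its quality hinges on how close sampled rankings come to ideal ones.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_ranking(scores, rng):
    """Sample a full ranking from the Plackett-Luce model defined by `scores`
    (sequentially draw each position via softmax over the remaining docs)."""
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        probs = softmax([scores[d] for d in remaining])
        doc = rng.choices(remaining, weights=probs)[0]
        ranking.append(doc)
        remaining.remove(doc)
    return ranking

def log_prob_grad(scores, ranking):
    """Gradient of log P(ranking | scores) under Plackett-Luce w.r.t. scores:
    +1 for each chosen doc, minus the softmax probs over the remaining docs."""
    grad = [0.0] * len(scores)
    remaining = list(range(len(scores)))
    for doc in ranking:
        probs = softmax([scores[d] for d in remaining])
        for d, p in zip(remaining, probs):
            grad[d] -= p
        grad[doc] += 1.0
        remaining.remove(doc)
    return grad

# Toy REINFORCE loop: a hypothetical reward favors rankings placing doc 0 first.
rng = random.Random(0)
scores = [0.0] * 4
for _ in range(300):
    ranking = sample_ranking(scores, rng)
    reward = 1.0 if ranking[0] == 0 else 0.0  # stand-in for a discriminator signal
    g = log_prob_grad(scores, ranking)
    scores = [s + 0.5 * reward * gi for s, gi in zip(scores, g)]

best = max(range(len(scores)), key=lambda d: scores[d])  # doc the generator ranks first
```

Since updates fire only when a sampled ranking happens to match the reward, sparse or far-from-ideal samples yield high-variance, weakly informative gradients — the failure mode the comparison with LambdaMART attributes the performance gap to.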
Pages: 24
Related papers
50 items in total
  • [21] A learning-to-rank method for information updating task
    Minh Quang Nhat Pham
    Minh Le Nguyen
    Bach Xuan Ngo
    Shimazu, Akira
    APPLIED INTELLIGENCE, 2012, 37 (04) : 499 - 510
  • [22] On the Suitability of Diversity Metrics for Learning-to-Rank for Diversity
    Santos, Rodrygo L. T.
    Macdonald, Craig
    Ounis, Iadh
    PROCEEDINGS OF THE 34TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR'11), 2011, : 1185 - 1186
  • [24] Controlling Popularity Bias in Learning-to-Rank Recommendation
    Abdollahpouri, Himan
    Burke, Robin
    Mobasher, Bamshad
    PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS'17), 2017, : 42 - 46
  • [25] Addressing Trust Bias for Unbiased Learning-to-Rank
    Agarwal, Aman
    Wang, Xuanhui
    Li, Cheng
    Bendersky, Mike
    Najork, Marc
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 4 - 14
  • [26] Rax: Composable Learning-to-Rank Using JAX
    Jagerman, Rolf
    Wang, Xuanhui
    Zhuang, Honglei
    Qin, Zhen
    Bendersky, Michael
    Najork, Marc
    Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2022, : 3051 - 3060
  • [27] RANKING AUTHORS WITH LEARNING-TO-RANK TOPIC MODELING
    Yang, Zaihan
    Hong, Liangjie
    Yin, Dawei
    Davison, Brian D.
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2015, 11 (04): : 1295 - 1316
  • [29] Controlling Fairness and Bias in Dynamic Learning-to-Rank
    Morik, Marco
    Singh, Ashudeep
    Hong, Jessica
    Joachims, Thorsten
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 429 - 438
  • [30] Cross-Silo Federated Learning-to-Rank
    Shi D.-Y.
    Wang Y.-S.
    Zheng P.-F.
    Tong Y.-X.
    Ruan Jian Xue Bao/Journal of Software, 2021, 32 (03): : 669 - 688