SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

Cited by: 54
Authors
Pang, Liang [1 ]
Xu, Jun [2 ,3 ]
Ai, Qingyao [4 ]
Lan, Yanyan [1 ]
Cheng, Xueqi [1 ]
Wen, Jirong [2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, CAS Key Lab Network Data Sci & Technol, Beijing, Peoples R China
[2] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
[3] Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China
[4] Univ Utah, Salt Lake City, UT 84112 USA
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Learning to rank; permutation-invariant ranking model; PRINCIPLE;
DOI
10.1145/3397271.3401104
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
In learning-to-rank for information retrieval, a ranking model is automatically learned from data and then used to rank sets of retrieved documents. An ideal ranking model is therefore a mapping from a document set to a permutation of that set, and it should satisfy two critical requirements: (1) it should model cross-document interactions so as to capture local context information in a query; (2) it should be permutation-invariant, meaning that no permutation of the input documents changes the output ranking. Previous studies on learning-to-rank either design univariate scoring functions that score each document separately, and thus fail to model cross-document interactions, or construct multivariate scoring functions that score documents sequentially, which inevitably sacrifices the permutation-invariance requirement. In this paper, we propose a neural learning-to-rank model called SetRank, which directly learns a permutation-invariant ranking model defined on document sets of any size. SetRank employs a stack of (induced) multi-head self-attention blocks as its key component for jointly learning embeddings of all the retrieved documents. The self-attention mechanism not only helps SetRank capture local context information from cross-document interactions, but also lets it learn permutation-equivariant representations of the input documents, thereby achieving a permutation-invariant ranking model. Experimental results on three benchmarks show that SetRank significantly outperformed baselines, including traditional learning-to-rank models and state-of-the-art neural IR models.
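The equivariance-to-invariance argument in the abstract can be illustrated concretely. Below is a minimal NumPy sketch (not the authors' implementation; the weight matrices and the linear scoring vector `w` are hypothetical stand-ins) showing that single-head self-attention without positional encodings is permutation-equivariant, and that scoring each output row and sorting therefore yields a permutation-invariant ranking:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a set of document vectors X (n x d).
    With no positional encodings, permuting the rows of X permutes the
    output rows identically (permutation equivariance)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax over attention scores.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 5, 4
X = rng.normal(size=(n, d))                   # 5 "documents", 4-dim features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)

# Equivariance: shuffling the input documents shuffles the outputs the same way.
perm = rng.permutation(n)
out_perm = self_attention(X[perm], Wq, Wk, Wv)
assert np.allclose(out_perm, out[perm])

# Invariance of the induced ranking: score each row with a (hypothetical)
# linear scorer w and sort; the ranking over the original documents is the
# same regardless of the input order.
w = rng.normal(size=d)
ranking = np.argsort(-(out @ w))              # indices into original order
ranking_perm = perm[np.argsort(-(out_perm @ w))]
assert np.array_equal(ranking, ranking_perm)
```

This is the core property the abstract relies on: an equivariant encoder followed by a per-document scorer and a sort gives an end-to-end permutation-invariant ranking model.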
Pages: 499-508 (10 pages)