An Attention-Based Interactive Learning-to-Rank Model for Document Retrieval

Cited by: 1
Authors
Zhang, Fan [1 ]
Chen, Wenyu [1 ]
Fu, Mingsheng [1 ]
Li, Fan [1 ]
Qu, Hong [1 ]
Yi, Zhang [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610054, Peoples R China
[2] Sichuan Univ, Coll Comp Sci, Chengdu 610017, Peoples R China
Funding
U.S. National Science Foundation; China Postdoctoral Science Foundation
Keywords
Atmospheric modeling; Markov processes; Testing; Tablet computers; Training; Fans; Computational modeling; Document retrieval; interactive learning-to-rank (LTR); reinforcement learning; ALGORITHM;
DOI
10.1109/TSMC.2021.3129839
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The core issue of learning-to-rank (LTR) for document retrieval lies in finding an optimal ranking policy that meets the search intent of the user. Most existing LTR approaches treat ranking as a static process, employing a fixed ranking policy to assign scores to documents in a single pass. In reality, ranking is not static but interactive: the user keeps interacting with the document retrieval system through exchanges that reveal search intent (e.g., rating or clicking the retrieved items). We model this interactive ranking process (IRP) and propose an Attention-Based Interactive LTR model (AIRank) that builds an intent-aware, flexible ranking policy to satisfy the user's needs. To improve ranking quality, the inherent relations among documents are captured by a self-attention mechanism, yielding an enriched representation of the user's intent. Furthermore, we adapt the policy-gradient learning method to train AIRank within the IRP. Experiments demonstrate the effectiveness of AIRank compared with state-of-the-art methods in terms of normalized discounted cumulative gain and expected reciprocal rank.
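To make the two mechanisms named in the abstract concrete, the sketch below shows, under stated assumptions, how self-attention over candidate documents and a REINFORCE-style policy-gradient update could fit together for interactive ranking. It is a minimal PyTorch sketch, not the authors' AIRank implementation: the `AttentionRanker` module, the `rank_episode` loop, the layer sizes, and the DCG-style per-position reward are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): self-attention over candidate
# documents plus a REINFORCE-style policy-gradient update for interactive ranking.
import torch
import torch.nn as nn

class AttentionRanker(nn.Module):
    def __init__(self, feat_dim, hidden_dim=64, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)
        # Self-attention captures relations among the candidate documents.
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, docs):                      # docs: (1, n_docs, feat_dim)
        h = self.proj(docs)
        h, _ = self.self_attn(h, h, h)            # enriched document representations
        return self.score(h).squeeze(-1)          # (1, n_docs) ranking scores

def rank_episode(model, docs, relevance, gamma=1.0):
    """Pick one document per position, collecting log-probs and per-step rewards."""
    n = docs.size(1)
    remaining = list(range(n))
    log_probs, rewards = [], []
    for pos in range(n):
        scores = model(docs)[0, remaining]
        dist = torch.distributions.Categorical(logits=scores)
        choice = dist.sample()
        doc_idx = remaining.pop(choice.item())
        log_probs.append(dist.log_prob(choice))
        # DCG-style position-discounted reward, standing in for user feedback.
        rewards.append(relevance[doc_idx] / torch.log2(torch.tensor(pos + 2.0)))
    returns, g = [], torch.tensor(0.0)
    for r in reversed(rewards):                   # discounted return per step
        g = r + gamma * g
        returns.insert(0, g)
    # REINFORCE loss: maximize return-weighted log-probability of chosen documents.
    return -torch.stack([lp * g for lp, g in zip(log_probs, returns)]).sum()

# Toy usage: 5 candidate documents with 16-dim features and graded relevance labels.
model = AttentionRanker(feat_dim=16)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
docs = torch.randn(1, 5, 16)
relevance = torch.tensor([2.0, 0.0, 1.0, 3.0, 0.0])
loss = rank_episode(model, docs, relevance)
optim.zero_grad()
loss.backward()
optim.step()
```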
Pages: 5770-5782
Page count: 13