Parameter-Efficient Sparse Retrievers and Rerankers Using Adapters

Cited by: 2
Authors
Pal, Vaishali [1 ,2 ]
Lassance, Carlos [2 ]
Dejean, Herve [2 ]
Clinchant, Stephane [2 ]
Affiliations
[1] Univ Amsterdam, IRLab, Amsterdam, Netherlands
[2] Naver Labs Europe, Meylan, France
Keywords
Adapters; Information Retrieval; Sparse neural retriever
DOI
10.1007/978-3-031-28238-6_2
CLC number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
Parameter-efficient transfer learning with adapters has been studied in Natural Language Processing (NLP) as an alternative to full fine-tuning. Adapters are memory-efficient and scale well with downstream tasks by training small bottleneck layers added between transformer layers while keeping the large pretrained language models (PLMs) frozen. Despite showing promising results in NLP, these methods are under-explored in Information Retrieval. While previous studies have only experimented with dense retrievers or in a cross-lingual retrieval scenario, in this paper we aim to complete the picture on the use of adapters in IR. First, we study adapters for SPLADE, a sparse retriever, for which adapters not only retain the efficiency and effectiveness otherwise achieved by fine-tuning, but are also memory-efficient and orders of magnitude lighter to train. We observe that Adapters-SPLADE optimizes just 2% of the training parameters, yet outperforms its fully fine-tuned counterpart and existing parameter-efficient dense IR models on IR benchmark datasets. Secondly, we address domain adaptation of neural retrieval with adapters on the cross-domain BEIR datasets and TripClick. Finally, we also consider knowledge sharing between rerankers and first-stage rankers. Overall, our study completes the examination of adapters for neural IR. (The code can be found at: https://github.com/naver/splade/tree/adapter-splade.)
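For illustration only, the following is a minimal PyTorch sketch of the bottleneck-adapter idea described in the abstract: small trainable layers sit alongside a frozen pretrained language model, so only a small fraction of the parameters is updated. The backbone choice (bert-base-uncased), module names, and bottleneck size are assumptions made for this sketch, not the authors' implementation; the actual code is in the repository linked above.

import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM


class BottleneckAdapter(nn.Module):
    """Small trainable bottleneck placed after a frozen transformer sub-layer."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: with near-zero initialization the adapter starts
        # close to the identity, leaving the frozen PLM's representations intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


# Freeze a BERT-style masked-LM backbone (SPLADE scores terms via the MLM head);
# only the adapter parameters would be trained.
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False

# One adapter per encoder layer; wiring them into the forward pass is omitted here.
adapters = nn.ModuleList(
    BottleneckAdapter(model.config.hidden_size)
    for _ in range(model.config.num_hidden_layers)
)

trainable = sum(p.numel() for p in adapters.parameters())
total = sum(p.numel() for p in model.parameters()) + trainable
print(f"trainable fraction: {trainable / total:.1%}")  # on the order of 1-2% here

With this kind of setup, the fraction of trainable parameters stays in the low single digits, which is consistent with the roughly 2% reported for Adapters-SPLADE in the abstract.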
Pages: 16-31
Number of pages: 16
Related papers (50 in total)
  • [11] Parameter-Efficient Transfer Learning for NLP
    Houlsby, Neil
    Giurgiu, Andrei
    Jastrzebski, Stanislaw
    Morrone, Bruna
    de Laroussilhe, Quentin
    Gesmundo, Andrea
    Attariyan, Mona
    Gelly, Sylvain
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [12] Composing Parameter-Efficient Modules with Arithmetic Operations
    Zhang, Jinghan
    Chen, Shiqi
    Liu, Junteng
    He, Junxian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [13] The Power of Scale for Parameter-Efficient Prompt Tuning
    Lester, Brian
    Al-Rfou, Rami
    Constant, Noah
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3045 - 3059
  • [14] PARAMETER-EFFICIENT VISION TRANSFORMER WITH LINEAR ATTENTION
    Zhao, Youpeng
    Tang, Huadong
    Jiang, Yingying
    Yong, A.
    Wu, Qiang
    Wang, Jun
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1275 - 1279
  • [15] PARAMETER-EFFICIENT HYDROLOGIC INFILTRATION-MODEL
    SMITH, RE
    PARLANGE, JY
    TRANSACTIONS-AMERICAN GEOPHYSICAL UNION, 1978, 59 (04): : 281 - 281
  • [16] Parameter-Efficient Tuning with Special Token Adaptation
    Yang, Xiaocong
    Huang, James Y.
    Zhou, Wenxuan
    Chen, Muhao
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 865 - 872
  • [17] On the Effectiveness of Parameter-Efficient Fine-Tuning
    Fu, Zihao
    Yang, Haoran
    So, Anthony Man-Cho
    Lam, Wai
    Bing, Lidong
    Collier, Nigel
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 12799 - 12807
  • [18] PARAMETER-EFFICIENT HYDROLOGIC INFILTRATION-MODEL
    SMITH, RE
    PARLANGE, JY
    WATER RESOURCES RESEARCH, 1978, 14 (03) : 533 - 538
  • [19] Parameter-Efficient Model Adaptation for Vision Transformers
    He, Xuehai
    Li, Chuanyuan
    Zhang, Pengchuan
    Yang, Jianwei
    Wang, Xin Eric
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 817 - 825
  • [20] PET: Parameter-efficient Knowledge Distillation on Transformer
    Jeon, Hyojin
    Park, Seungcheol
    Kim, Jin-Gee
    Kang, U.
    PLOS ONE, 2023, 18 (07):