Parameter-Efficient Sparse Retrievers and Rerankers Using Adapters

Cited by: 2
|
Authors
Pal, Vaishali [1 ,2 ]
Lassance, Carlos [2 ]
Dejean, Herve [2 ]
Clinchant, Stephane [2 ]
Affiliations
[1] Univ Amsterdam, IRLab, Amsterdam, Netherlands
[2] Naver Labs Europe, Meylan, France
Keywords
Adapters; Information Retrieval; Sparse neural retriever;
DOI
10.1007/978-3-031-28238-6_2
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Parameter-efficient transfer learning with adapters has been studied in Natural Language Processing (NLP) as an alternative to full fine-tuning. Adapters are memory-efficient and scale well with downstream tasks by training small bottleneck layers added between transformer layers while keeping the large pretrained language models (PLMs) frozen. Despite showing promising results in NLP, these methods are under-explored in Information Retrieval. While previous studies have only experimented with dense retrievers or cross-lingual retrieval scenarios, in this paper we aim to complete the picture on the use of adapters in IR. First, we study adapters for SPLADE, a sparse retriever, for which adapters not only retain the efficiency and effectiveness otherwise achieved through fine-tuning, but are also memory-efficient and orders of magnitude lighter to train. We observe that Adapters-SPLADE not only optimizes just 2% of the training parameters, but also outperforms its fully fine-tuned counterpart and existing parameter-efficient dense IR models on IR benchmark datasets. Secondly, we address domain adaptation of neural retrieval with adapters on the cross-domain BEIR datasets and TripClick. Finally, we also consider knowledge sharing between rerankers and first-stage rankers. Overall, our study completes the examination of adapters for neural IR. (The code can be found at: https://github.com/naver/splade/tree/adapter-splade.)
Pages: 16-31
Page count: 16
Related Papers
50 records total
  • [1] Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
    Lei, Tao
    Bai, Junwen
    Brahma, Siddhartha
    Ainslie, Joshua
    Lee, Kenton
    Zhou, Yanqi
    Du, Nan
    Zhao, Vincent Y.
    Wu, Yuexin
    Li, Bo
    Zhang, Yu
    Chang, Ming-Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Residual Adapters for Parameter-Efficient ASR Adaptation to Atypical and Accented Speech
    Tomanek, Katrin
    Zayats, Vicky
    Padfield, Dirk
    Vaillancourt, Kara
    Biadsy, Fadi
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 6751 - 6760
  • [3] Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers
    Tam, Weng Lam
    Liu, Xiao
    Ji, Kaixuan
    Xue, Lilong
    Zhang, Xingjian
    Dong, Yuxiao
    Lin, Jiahua
    Hu, Maodi
    Tang, Jie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13117 - 13130
  • [4] Generalized Kronecker-based Adapters for Parameter-efficient Fine-tuning of Vision Transformers
    Edalati, Ali
    Hameed, Marawan Gamal Abdel
    Mosleh, Ali
    2023 20TH CONFERENCE ON ROBOTS AND VISION, CRV, 2023, : 97 - 104
  • [5] PreAdapter: Sparse Adaptive Parameter-efficient Transfer Learning for Language Models
    Mao, Chenyang
    Jin, Xiaoxiao
    Yue, Dengfeng
    Leng, Tuo
    2024 7TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND BIG DATA, ICAIBD 2024, 2024, : 218 - 225
  • [6] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
    Hu, Zhiqiang
    Wang, Lei
    Lan, Yihuai
    Xu, Wanyu
    Lim, Ee-Peng
    Bing, Lidong
    Xu, Xing
    Poria, Soujanya
    Lee, Roy Ka-Wei
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5254 - 5276
  • [7] One is Not Enough: Parameter-Efficient Fine-Tuning With Multiplicative Sparse Factorization
    Chen, Xuxi
    Chen, Tianlong
    Cheng, Yu
    Chen, Weizhu
    Awadallah, Ahmed Hassan
    Wang, Zhangyang
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (06) : 1059 - 1069
  • [8] Parameter-Efficient Masking Networks
    Bai, Yue
    Wang, Huan
    Ma, Xu
    Zhang, Yitian
    Tao, Zhiqiang
    Fu, Yun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [9] Shapeshifter: a Parameter-efficient Transformer using Factorized Reshaped Matrices
    Panahi, Aliakbar
    Saeedi, Seyran
    Arodz, Tom
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Parameter-efficient deep probabilistic forecasting
    Sprangers, Olivier
    Schelter, Sebastian
    de Rijke, Maarten
    INTERNATIONAL JOURNAL OF FORECASTING, 2023, 39 (01) : 332 - 345