AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters

Cited by: 0
Authors
Beck, Tilman [1 ]
Bohlender, Bela [1 ]
Viehmann, Christina [2 ]
Hane, Vincent [1 ]
Adamson, Yanik [1 ]
Khuri, Jaber [1 ]
Brossmann, Jonas [1 ]
Pfeiffer, Jonas [1 ]
Gurevych, Iryna [1 ]
Affiliations
[1] Tech Univ Darmstadt, Dept Comp Sci, Ubiquitous Knowledge Proc Lab UKP Lab, Darmstadt, Germany
[2] Johannes Gutenberg Univ Mainz, Inst Publizist, Mainz, Germany
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art natural language processing (NLP) research. This also allows people outside of NLP to use such models and adapt them to specific use-cases. However, a certain amount of technical proficiency is still required, which is an entry barrier for users who want to apply these models to a given task but lack the necessary knowledge or resources. In this work, we aim to overcome this gap by providing a tool which allows researchers to leverage pretrained models without writing a single line of code. Built upon the parameter-efficient adapter modules for transfer learning, our AdapterHub Playground provides an intuitive interface, allowing users to apply adapters for prediction, training, and analysis of textual data across a variety of NLP tasks. We present the tool's architecture and demonstrate its advantages with prototypical use-cases, where we show that predictive performance can easily be increased in a few-shot learning scenario. Finally, we evaluate its usability in a user study. We provide the code and a live interface(1).
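The Playground itself targets a no-code workflow, but the adapter-based setup it wraps can be sketched programmatically. The snippet below is a minimal sketch, assuming the Hugging Face `adapters` library (the successor of adapter-transformers, which underlies AdapterHub); the adapter identifier `AdapterHub/bert-base-uncased-pf-sst2` and the example sentence are illustrative assumptions, not taken from the paper.

    # Minimal sketch: loading a task adapter for prediction (assumes the
    # Hugging Face `adapters` library; the adapter id below is illustrative).
    import torch
    from transformers import AutoTokenizer
    from adapters import AutoAdapterModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoAdapterModel.from_pretrained("bert-base-uncased")

    # Download a small task-specific adapter (with its prediction head) and
    # activate it; the frozen base transformer is shared across tasks.
    adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2")
    model.set_active_adapters(adapter_name)

    inputs = tokenizer("The interface is intuitive and easy to use.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.argmax(dim=-1).item())  # predicted class index

Few-shot training as described in the paper would analogously add a fresh adapter with model.add_adapter(...), mark it trainable with model.train_adapter(...), and fine-tune only the adapter weights on the small set of labeled examples while the base model stays frozen.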
Pages: 61-75
Page count: 15