Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Cited by: 0
Authors
Jang, Joel [1 ]
Kim, Seungone [1 ]
Ye, Seonghyeon [1 ]
Kim, Doyoung [1 ]
Logeswaran, Lajanugen [2 ]
Lee, Moontae [2 ,3 ]
Lee, Kyungjae [2 ]
Seo, Minjoon [1 ]
Affiliations
[1] KAIST, Korea, Republic of
[2] LG AI Research, Korea, Republic of
[3] University of Illinois, Chicago, United States
DOI: Not available
Abstract
Computational linguistics
Pages: 14702 - 14729
Related papers (50 total)
  • [41] Empowering Legal Citation Recommendation via Efficient Instruction-Tuning of Pre-trained Language Models
    Wang, Jie
    Bansal, Kanha
    Arapakis, Ioannis
    Ge, Xuri
    Jose, Joemon M.
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608 : 310 - 324
  • [42] DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
    Wang, Yejie
    He, Keqing
    Dong, Guanting
    Wang, Pei
    Zeng, Weihao
    Diao, Muxi
    Zhang, Mengdi
    Wang, Jingang
    Cai, Xunliang
    Xu, Weiran
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 4706 - 4721
  • [43] Exploring the Influence of Language on Assessment Given a Mismatch Between Language of Instruction and Language of Practice
    Diab, Mohammad I.
    Nasr, Ziad G.
    El-Hajj, Maguy S.
    Elewa, Hazem
    El-Geed, Hager A.
    Wilby, Kyle John
    SIMULATION IN HEALTHCARE-JOURNAL OF THE SOCIETY FOR SIMULATION IN HEALTHCARE, 2019, 14 (04): 271 - 275
  • [44] Scaling Instruction-Finetuned Language Models
    Chung, Hyung Won
    Hou, Le
    Longpre, Shayne
    Zoph, Barret
    Tay, Yi
    Fedus, William
    Li, Yunxuan
    Wang, Xuezhi
    Dehghani, Mostafa
    Brahma, Siddhartha
    Webson, Albert
    Gu, Shixiang Shane
    Dai, Zhuyun
    Suzgun, Mirac
    Chen, Xinyun
    Chowdhery, Aakanksha
    Castro-Ros, Alex
    Pellat, Marie
    Robinson, Kevin
    Valter, Dasha
    Narang, Sharan
    Mishra, Gaurav
    Yu, Adams
    Zhao, Vincent
    Huang, Yanping
    Dai, Andrew
    Yu, Hongkun
    Petrov, Slav
    Chi, Ed H.
    Dean, Jeff
    Devlin, Jacob
    Roberts, Adam
    Zhou, Denny
    Le, Quoc V.
    Wei, Jason
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [45] MINT: Boosting Audio-Language Model via Multi-Target Pre-Training and Instruction Tuning
    Zhao, Hang
    Xing, Yifei
    Yu, Zhesong
    Zhu, Bilei
    Lu, Lu
    Ma, Zejun
    INTERSPEECH 2024, 2024, : 52 - 56
  • [46] SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models
    Thangarasa, Vithursan
    Gupta, Abhay
    Marshall, William
    Li, Tianda
    Leong, Kevin
    DeCoste, Dennis
    Lie, Sean
    Saxena, Shreyas
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2134 - 2146
  • [47] Exploring Sensory Knowledge and Pre-training Language Models for Chinese Metaphor Detection
    Zhao, Qingqing
    Xiang, Xue
    Wang, Zhongqing
    2024 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, IALP 2024, 2024, : 120 - 126
  • [48] Usability of Computer Models in Chemistry Instruction: Results of Expert Evaluation
    Machkova, Veronika
    Bilek, Martin
    DIVAI 2014: 10TH INTERNATIONAL SCIENTIFIC CONFERENCE ON DISTANCE LEARNING IN APPLIED INFORMATICS, 2014, : 117 - 127
  • [49] Training personal robots using natural language instruction
    Lauria, S
    Bugmann, G
    Kyriacou, T
    Bos, J
    Klein, E
    IEEE INTELLIGENT SYSTEMS, 2001, 16 (05) : 38 - 45
  • [50] Exploring the amount and type of writing instruction during language arts instruction in kindergarten classrooms
    Cynthia S. Puranik
    Stephanie Al Otaiba
    Jessica Folsom Sidler
    Luana Greulich
    Reading and Writing, 2014, 27 : 213 - 236