On the Validity of Pre-Trained Transformers for Natural Language Processing in the Software Engineering Domain

Cited by: 24
Authors
von der Mosel, Julian [1 ]
Trautsch, Alexander [1 ]
Herbold, Steffen [2 ]
Affiliations
[1] Univ Gottingen, Inst Comp Sci, Gottingen, Germany
[2] Tech Univ Clausthal, Inst Software & Syst Engn, D-38678 Clausthal Zellerfeld, Germany
Keywords
Transformers; Task analysis; Context modeling; Bit error rate; Data models; Software engineering; Adaptation models; Natural language processing; software engineering; transformers; SENTIMENT ANALYSIS;
DOI
10.1109/TSE.2022.3178469
CLC number
TP31 [Computer software];
Discipline codes
081202; 0835;
Abstract
Transformers are the current state of the art in natural language processing across many domains and are gaining traction within software engineering research as well. Such models are pre-trained on large amounts of data, usually from the general domain. However, we have only a limited understanding of the validity of transformers within the software engineering domain, i.e., how well such models understand words and sentences within a software engineering context and how this improves the state of the art. In this article, we shed light on this complex but crucial issue. We compare BERT transformer models trained on software engineering data with transformers based on general-domain data along multiple dimensions: their vocabulary, their ability to understand which words are missing, and their performance in classification tasks. Our results show that for tasks that require an understanding of the software engineering context, pre-training with software engineering data is valuable, while general-domain models are sufficient for general language understanding, also within the software engineering domain.
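The vocabulary comparison described in the abstract can be illustrated with a toy sketch: measure how much a software-engineering vocabulary overlaps with a general-domain one. The word sets below are invented stand-ins, not the actual BERT vocabularies compared in the paper.

```python
# Toy sketch of a vocabulary-overlap comparison between a general-domain
# and a software-engineering (SE) word list. All words here are illustrative.
general_vocab = {"the", "model", "table", "branch", "commit", "bug"}
se_vocab = {"the", "model", "refactor", "branch", "commit", "stacktrace"}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# Terms shared by both vocabularies, and SE terms a general-domain
# tokenizer would have to split into subword pieces.
shared = general_vocab & se_vocab
se_only = se_vocab - general_vocab

overlap = jaccard(general_vocab, se_vocab)  # 4 shared / 8 total = 0.5
```

A larger `se_only` set suggests the general-domain vocabulary is missing domain terminology, which is one motivation the paper gives for pre-training with software engineering data.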
Pages: 1487-1507
Page count: 21
Related papers
50 entries in total
  • [1] A Study of Pre-trained Language Models in Natural Language Processing
    Duan, Jiajia
    Zhao, Hui
    Zhou, Qian
    Qiu, Meikang
    Liu, Meiqin
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 116 - 121
  • [2] Pre-trained models for natural language processing: A survey
    Qiu XiPeng
    Sun TianXiang
    Xu YiGe
    Shao YunFan
    Dai Ning
    Huang XuanJing
    [J]. SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2020, 63 (10) : 1872 - 1897
  • [6] Revisiting Pre-trained Models for Chinese Natural Language Processing
    Cui, Yiming
    Che, Wanxiang
    Liu, Ting
    Qin, Bing
    Wang, Shijin
    Hu, Guoping
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 657 - 668
  • [7] A pre-trained BERT for Korean medical natural language processing
    Kim, Yoojoong
    Kim, Jong-Ho
    Lee, Jeong Moon
    Jang, Moon Joung
    Yum, Yun Jin
    Kim, Seongtae
    Shin, Unsub
    Kim, Young-Min
    Joo, Hyung Joon
    Song, Sanghoun
    [J]. SCIENTIFIC REPORTS, 2022, 12 (01)
  • [9] Generative pre-trained transformers (GPT) for surface engineering
    Kamnis, Spyros
    [J]. SURFACE & COATINGS TECHNOLOGY, 2023, 466
  • [10] Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing
    Huawei Technologies Co., Ltd.
    [J]. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP): 3135-3151