On Effectiveness of Further Pre-training on BERT Models for Story Point Estimation

Times Cited: 0
|
Author
Amasaki, Sousuke [1 ]
Affiliation
[1] Okayama Prefectural Univ, Dept Syst Engn, Soja, Okayama, Japan
Keywords
effort estimation; BERT; further pre-training; story points;
DOI
10.1145/3617555.3617877
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
CONTEXT: Recent studies on story point estimation have used deep learning-based language models. These language models were pre-trained on general corpora, but language models further pre-trained on more specific corpora might be more effective. OBJECTIVE: To examine how further pre-trained language models affect the predictive performance of story point estimation. METHOD: Two types of further pre-trained language models, namely domain-specific and repository-specific models, were compared with off-the-shelf models and Deep-SE. Estimation performance was evaluated on data from 16 projects. RESULTS: The effectiveness of the domain-specific and repository-specific models was limited, although they outperformed the base model from which they were further pre-trained. CONCLUSION: The effect of further pre-training was small. Choosing a large off-the-shelf model might be the better option.
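The "further pre-training" compared in the abstract can be illustrated with a minimal sketch (not the paper's actual pipeline): continuing masked-language-model training of an off-the-shelf BERT checkpoint on issue-report text before fine-tuning it as a story point estimator. The corpus file `issue_reports.txt`, the `bert-base-uncased` base checkpoint, the output path, and all hyperparameters below are illustrative assumptions, not values reported by the paper.

```python
# Sketch of further pre-training with HuggingFace Transformers:
# continue masked-language-model (MLM) training of a general-purpose BERT
# on project issue text; the resulting checkpoint would later be fine-tuned
# for story point estimation (fine-tuning step not shown).
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "bert-base-uncased"                      # off-the-shelf base model (assumed)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical corpus: one issue title + description per line.
corpus = load_dataset("text", data_files={"train": "issue_reports.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style masking of 15% of tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-issues-further-pretrained",
    num_train_epochs=3,                          # illustrative hyperparameters
    per_device_train_batch_size=16,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
model.save_pretrained("bert-issues-further-pretrained")
```

A repository-specific variant would restrict the corpus to issues from a single project's tracker, while a domain-specific variant would pool issue text across many software projects; both are then fine-tuned on labeled story points.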
Pages: 49 - 53
Number of pages: 5
Related Papers
50 records in total
  • [1] FEDBFPT: An Efficient Federated Learning Framework for BERT Further Pre-training
    Wang, Xin'ao
    Li, Huan
    Chen, Ke
    Shou, Lidan
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4344 - 4352
  • [2] POS-BERT: Point cloud one-stage BERT pre-training
    Fu, Kexue
    Gao, Peng
    Liu, Shaolei
    Qu, Linhao
    Gao, Longxiang
    Wang, Manning
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 240
  • [3] Research frontiers of pre-training mathematical models based on BERT
    Li, Guang
    Wang, Wennan
    Zhu, Liukai
    Peng, Jun
    Li, Xujia
    Luo, Ruijie
    2022 IEEE INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, BIG DATA AND ALGORITHMS (EEBDA), 2022, : 154 - 158
  • [4] Point Cloud Pre-training with Diffusion Models
    Zheng, Xiao
    Huang, Xiaoshui
    Mei, Guofeng
    Hou, Yuenan
    Lyu, Zhaoyang
    Dai, Bo
    Ouyang, Wanli
    Gong, Yongshun
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 22935 - 22945
  • [5] Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling
    Yu, Xumin
    Tang, Lulu
    Rao, Yongming
    Huang, Tiejun
    Zhou, Jie
    Lu, Jiwen
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 19291 - 19300
  • [6] Trajectory-BERT: Trajectory Estimation Based on BERT Trajectory Pre-Training Model and Particle Filter Algorithm
    Wu, You
    Yu, Hongyi
    Du, Jianping
    Ge, Chenglong
    SENSORS, 2023, 23 (22)
  • [7] Entity Enhanced BERT Pre-training for Chinese NER
    Jia, Chen
    Shi, Yuefeng
    Yang, Qinrong
    Zhang, Yue
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 6384 - 6396
  • [8] Pre-Training With Whole Word Masking for Chinese BERT
    Cui, Yiming
    Che, Wanxiang
    Liu, Ting
    Qin, Bing
    Yang, Ziqing
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 3504 - 3514
  • [9] Pre-training Two BERT-Like Models for Moroccan Dialect: MorRoBERTa and MorrBERT
    Moussaoui, O.
    El Younoussi, Y.
    MENDEL, 2023, 29 (01) : 55 - 61