AUGER: Automatically Generating Review Comments with Pre-training Models

Cited by: 0
Authors
Li, Lingwei [1 ]
Yang, Li [2 ]
Jiang, Huaxi [1 ]
Yan, Jun [3 ]
Luo, Tiejian [4 ]
Hua, Zihan [5 ]
Liang, Geng [2 ]
Zuo, Chun [6 ]
Affiliations
[1] Institute of Software, CAS, Univ. of Chinese Academy of Sciences, Beijing, China
[2] Institute of Software, CAS, Beijing, China
[3] State Key Laboratory of Computer Science, Institute of Software, CAS, Univ. of Chinese Academy of Sciences, Beijing, China
[4] Univ. of Chinese Academy of Sciences, Beijing, China
[5] Wuhan University, Univ. of Chinese Academy of Sciences, Wuhan, China
[6] Sinosoft Company Limited, Beijing, China
Source
arXiv | 2022
Keywords
Machine learning
DOI
Not available
Related papers (50 total)
  • [31] Pre-training phenotyping classifiers
    Dligach, Dmitriy
    Afshar, Majid
    Miller, Timothy
    [J]. JOURNAL OF BIOMEDICAL INFORMATICS, 2021, 113 (113)
  • [32] Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods
    Jing, Ya
    Zhu, Xuelin
    Liu, Xingbin
    Sima, Qie
    Yang, Taozheng
    Feng, Yunhai
    Kong, Tao
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 11390 - 11395
  • [33] On Effectiveness of Further Pre-training on BERT Models for Story Point Estimation
    Amasaki, Sousuke
    [J]. PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON PREDICTIVE MODELS AND DATA ANALYTICS IN SOFTWARE ENGINEERING, PROMISE 2023, 2023, : 49 - 53
  • [34] Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis
    Perez-Almendros, Carla
    Espinosa-Anke, Luis
    Schockaert, Steven
    [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 3902 - 3911
  • [35] The Value of Pre-Training for Deep Learning Acute Stroke Triaging Models
    Yu, Yannan
    Xie, Yuan
    Gong, Enhao
    Thamm, Thoralf
    Ouyang, Jiahong
    Christensen, Soren
    Lansberg, Maarten
    Albers, Gregory
    Zaharchuk, Greg
    [J]. STROKE, 2020, 51
  • [36] Towards Adversarial Attack on Vision-Language Pre-training Models
    Zhang, Jiaming
    Yi, Qi
    Sang, Jitao
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5005 - 5013
  • [37] Code Smell Detection Research Based on Pre-training and Stacking Models
    Zhang, Dongwen
    Song, Shuai
    Zhang, Yang
    Liu, Haiyang
    Shen, Gaojie
    [J]. IEEE LATIN AMERICA TRANSACTIONS, 2024, 22 (01) : 22 - 30
  • [38] Pre-training and Evaluating Transformer-based Language Models for Icelandic
    Daðason, Jón Friðrik
    Loftsson, Hrafn
    [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 7386 - 7391
  • [39] Rethinking Pre-training and Self-training
    Zoph, Barret
    Ghiasi, Golnaz
    Lin, Tsung-Yi
    Cui, Yin
    Liu, Hanxiao
    Cubuk, Ekin D.
    Le, Quoc V.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [40] Automatically Generating Models for Botnet Detection
    Wurzinger, Peter
    Bilge, Leyla
    Holz, Thorsten
    Goebel, Jan
    Kruegel, Christopher
    Kirda, Engin
    [J]. COMPUTER SECURITY - ESORICS 2009, PROCEEDINGS, 2009, 5789 : 232+