Unified pre-training for program understanding and generation

Cited by: 0
Authors
Ahmad, Wasi Uddin [1]
Chakraborty, Saikat [2]
Ray, Baishakhi [2]
Chang, Kai-Wei [1]
Affiliations
[1] University of California, Los Angeles, United States
[2] Columbia University, United States
Source
arXiv | 2021
Keywords
Broad spectrum - Code translation - Language generation - Legacy code - Natural languages - Pre-training - Program generation - Program understanding - Sequence models - Summarization and generations
DOI
None
Related papers
50 items in total
  • [41] Unified building change detection pre-training method with masked semantic annotations
    Quan, Yujun; Yu, Anzhu; Guo, Wenyue; Lu, Xuanbei; Jiang, Bingchun; Zheng, Shulei; He, Peipei
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2023, 120
  • [42] ProQA: Structural Prompt-based Pre-training for Unified Question Answering
    Zhong, Wanjun; Gao, Yifan; Ding, Ning; Qin, Yujia; Liu, Zhiyuan; Zhou, Ming; Wang, Jiahai; Yin, Jian; Duan, Nan
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022: 4230-4243
  • [43] Multimodal Pre-Training Based on Graph Attention Network for Document Understanding
    Zhang, Zhenrong; Ma, Jiefeng; Du, Jun; Wang, Licheng; Zhang, Jianshu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 6743-6755
  • [44] A Study into Pre-training Strategies for Spoken Language Understanding on Dysarthric Speech
    Wang, Pu; BabaAli, Bagher; Van Hamme, Hugo
    INTERSPEECH 2021, 2021: 36-40
  • [45] ComicBERT: A Transformer Model and Pre-training Strategy for Contextual Understanding in Comics
    Soykan, Gurkan; Yuret, Deniz; Sezgin, Tevfik Metin
    DOCUMENT ANALYSIS AND RECOGNITION-ICDAR 2024 WORKSHOPS, PT I, 2024, 14935: 257-281
  • [46] UIT: Unifying Pre-training Objectives for Image-Text Understanding
    Xu, Guoqiang; Yan, Shenggang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258: 572-585
  • [47] Multi-Modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training
    Moon, Jong Hak; Lee, Hyungyung; Shin, Woncheol; Kim, Young-Hak; Choi, Edward
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26 (12): 6070-6080
  • [48] MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
    Zeng, Mingliang; Tan, Xu; Wang, Rui; Ju, Zeqian; Qin, Tao; Liu, Tie-Yan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 791-800
  • [49] Understanding Chinese Video and Language via Contrastive Multimodal Pre-Training
    Lei, Chenyi; Luo, Shixian; Liu, Yong; He, Wanggui; Wang, Jiamang; Wang, Guoxin; Tang, Haihong; Miao, Chunyan; Li, Houqiang
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021: 2567-2576
  • [50] Does a Pre-Training Program Influence Colonoscopy Proficiency during Fellowship?
    Kim, Duk Hwan; Park, Soo Jung; Cheon, Jae Hee; Kim, Tae Il; Kim, Won Ho; Hong, Sung Pil
    PLOS ONE, 2016, 11 (10)