What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code

Cited by: 23
Authors
Wan, Yao [1 ,4 ]
Zhao, Wei [1 ,4 ]
Zhang, Hongyu [2 ]
Sui, Yulei [3 ]
Xu, Guandong [3 ]
Jin, Hai [1 ,4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
[2] Univ Newcastle, Newcastle, NSW, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW, Australia
[4] HUST, Natl Engn Res Ctr Big Data Technol & Syst, Serv Comp Technol & Syst Lab, Cluster & Grid Comp Lab, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Code representation; deep learning; pre-trained language model; probing; attention analysis; syntax tree induction;
DOI
10.1145/3510003.3510050
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Codes
081202; 0835;
Abstract
Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization. These models leverage masked pre-training and the Transformer architecture and have achieved promising results. However, there has been little progress on the interpretability of existing pre-trained code models: it is not clear why these models work or what feature correlations they can capture. In this paper, we conduct a thorough structural analysis aiming to provide an interpretation of pre-trained language models for source code (e.g., CodeBERT and GraphCodeBERT) from three distinct perspectives: (1) attention analysis, (2) probing on the word embeddings, and (3) syntax tree induction. Through comprehensive analysis, this paper reveals several insightful findings that may inspire future studies: (1) attention aligns strongly with the syntax structure of code; (2) pre-trained language models of code preserve the syntax structure of code in the intermediate representations of each Transformer layer; and (3) pre-trained models of code are able to induce syntax trees of code. These findings suggest that it may be helpful to incorporate the syntax structure of code into the pre-training process to obtain better code representations.
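To make the first of these perspectives concrete, the sketch below shows one way to extract and inspect per-layer attention weights from a pre-trained code model. It is an illustration only, assuming the Hugging Face transformers library and the public microsoft/codebert-base checkpoint, and is not the authors' own analysis code; aligning the resulting token-to-token attention with edges of the program's syntax tree is the additional comparison the paper performs.

```python
# Illustrative attention extraction from CodeBERT (a sketch, not the paper's tooling).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)
model.eval()

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per Transformer layer,
# each of shape (batch_size, num_heads, seq_len, seq_len).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
last_layer = outputs.attentions[-1][0]   # (num_heads, seq_len, seq_len)
avg_attention = last_layer.mean(dim=0)   # average over attention heads

# For each (sub)token, report the token it attends to most strongly; comparing
# such pairs against syntax-tree edges is the kind of analysis described above.
for i, tok in enumerate(tokens):
    j = int(avg_attention[i].argmax())
    print(f"{tok:>12} -> {tokens[j]}")
```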
Pages: 2377-2388
Number of pages: 12
Related Papers
50 records in total
  • [21] Pre-trained language models in medicine: A survey
    Luo, Xudong
    Deng, Zhiqi
    Yang, Binxia
    Luo, Michael Y.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 154
  • [22] Compressing Pre-trained Models of Code into 3 MB
    Shi, Jieke
    Yang, Zhou
    Xu, Bowen
    Kang, Hong Jin
    Lo, David
    PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022, 2022,
  • [23] CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
    Jha, Akshita
    Reddy, Chandan K.
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 14892 - 14900
  • [24] PTM-APIRec: Leveraging Pre-trained Models of Source Code in API Recommendation
    Li, Zhihao
    Li, Chuanyi
    Tang, Ze
    Huang, Wanhong
    Ge, Jidong
    Luo, Bin
    Ng, Vincent
    Wang, Ting
    Hu, Yucheng
    Zhang, Xiaopeng
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (03)
  • [25] Large pre-trained language models contain human-like biases of what is right and wrong to do
    Schramowski, Patrick
    Turan, Cigdem
    Andersen, Nico
    Rothkopf, Constantin A.
    Kersting, Kristian
    NATURE MACHINE INTELLIGENCE, 2022, 4 (03) : 258 - 268
  • [27] Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models
    Koksal, Omer
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [28] A Study of Pre-trained Language Models in Natural Language Processing
    Duan, Jiajia
    Zhao, Hui
    Zhou, Qian
    Qiu, Meikang
    Liu, Meiqin
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 116 - 121
  • [29] Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code
    Zhang, Zhaowei
    Zhang, Hongyu
    Shen, Beijun
    Gu, Xiaodong
    PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022, : 1073 - 1084
  • [30] From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader
    Xu, Weiwen
    Li, Xin
    Zhang, Wenxuan
    Zhou, Meng
    Lam, Wai
    Si, Luo
    Bing, Lidong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,