EnCoSum: enhanced semantic features for multi-scale multi-modal source code summarization

Cited by: 0
Authors
Yuexiu Gao
Hongyu Zhang
Chen Lyu
Affiliations
[1] Shandong Normal University, School of Information Science and Engineering
[2] Chongqing University
Keywords
Code summarization; Abstract syntax trees; Method name sequences; Cross-modal fusion; Deep learning;
DOI
Not available
Abstract
Code summarization aims to generate concise natural language descriptions for a piece of code, which can help developers comprehend the source code. Analysis of current work shows that extracting syntactic and semantic features of source code is crucial for generating high-quality summaries. To provide a more comprehensive representation of source code features from different perspectives, we propose EnCoSum, an approach that enhances semantic features for multi-scale multi-modal code summarization. It builds on our previously proposed M2TS approach (a multi-scale multi-modal Transformer-based method for source code summarization), which uses a multi-scale method to capture the structural information of Abstract Syntax Trees (ASTs) more completely and accurately at multiple local and global levels. In addition, we devise a new cross-modal fusion method that fuses source code and AST features and highlights the key features in each modality that help generate summaries. To obtain richer semantic information, we improve M2TS in two ways. First, we add data-flow and control-flow edges to ASTs; we call these edge-augmented ASTs Enhanced-ASTs (E-ASTs). Second, we introduce method name sequences extracted from the source code, which carry more knowledge about critical tokens in the corresponding summaries and can help the model generate higher-quality summaries. We conduct extensive experiments on processed Java and Python datasets and evaluate our approach with the four most commonly used machine translation metrics. The experimental results demonstrate that EnCoSum is effective and outperforms current state-of-the-art methods. Furthermore, ablation experiments on each of the model's key components show that they all contribute to EnCoSum's performance.
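Two ingredients from the abstract — augmenting an AST with flow edges (E-ASTs) and extracting method name sequences — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses Python's standard `ast` module, treats consecutive statements in a block as a stand-in for control-flow edges (real data-flow analysis is more involved), and the helper names `build_east_edges` and `method_name_tokens` are hypothetical.

```python
import ast
import re

def build_east_edges(source: str):
    """Parse source into an AST, collect parent-child (syntax) edges,
    and add sequential-execution edges between consecutive statements
    in each block, as a rough proxy for control-flow edges in an E-AST."""
    tree = ast.parse(source)
    syntax_edges, flow_edges = [], []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            syntax_edges.append((type(parent).__name__, type(child).__name__))
        body = getattr(parent, "body", None)
        if isinstance(body, list):
            # statement i executes before statement i+1 in the same block
            for a, b in zip(body, body[1:]):
                flow_edges.append((type(a).__name__, type(b).__name__))
    return syntax_edges, flow_edges

def method_name_tokens(name: str):
    """Split a camelCase or snake_case method name into sub-tokens,
    which often overlap with critical tokens in the summary."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return [p.lower() for p in parts]

syn, flow = build_east_edges("def f(x):\n    y = x + 1\n    return y\n")
# ('Assign', 'Return') appears in flow: the assignment runs before the return
assert ("Assign", "Return") in flow
print(method_name_tokens("getUserName"))  # ['get', 'user', 'name']
```

In a full pipeline, both edge sets would be fed to a structure encoder and the sub-token sequence to a separate encoder before cross-modal fusion.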