Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Cited by: 0
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
CLC Classification Number
TP39 [Computer Applications];
Discipline Classification Code
081203 ; 0835 ;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential for fully utilizing the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use, yet automating its parallelization remains challenging. Although existing automation tools such as Cetus and DiscoPoP simplify parallelization, they still have limitations when dealing with complex data dependencies and control flows. Inspired by the success of deep learning in Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the problem of automatically parallelizing code with OpenMP directives. We propose a novel Transformer-based multimodal model, ParaMP, to improve the accuracy of OpenMP directive classification. The ParaMP model takes into account not only the sequential features of the code text but also its structural features, enriching the model's input by representing the Abstract Syntax Trees (ASTs) of the code as binary trees. In addition, we built the BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, to provide a basis for model training. Experimental evaluation shows that our model outperforms existing automated tools and models on key performance metrics such as F1 score and recall. By combining the sequential and structural features of code, this study significantly improves the accuracy of OpenMP directive classification and offers valuable insight into applying deep learning techniques to programming tasks.
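
Illustrative sketch: the abstract states that ParaMP enriches its input by representing each code snippet's AST as a binary tree, but does not publish the exact transformation. The following minimal, self-contained Python sketch assumes the standard left-child/right-sibling encoding for this step; ASTNode, BinNode, and to_binary are hypothetical names introduced here for illustration, not the authors' implementation or the BTCode preprocessing pipeline.

# Minimal sketch of turning an n-ary AST into a binary tree, assuming the
# standard left-child/right-sibling encoding (an assumption; the paper does
# not specify its transformation). A real pipeline would obtain the AST from
# a C/C++ parser front end rather than building it by hand.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ASTNode:
    """A generic n-ary AST node, e.g. as produced by a C/C++ parser."""
    label: str
    children: List["ASTNode"] = field(default_factory=list)


@dataclass
class BinNode:
    """Binary-tree node: left points to the first child, right to the next sibling."""
    label: str
    left: Optional["BinNode"] = None
    right: Optional["BinNode"] = None


def to_binary(node: ASTNode) -> BinNode:
    """Left-child/right-sibling transform of an n-ary AST into a binary tree."""
    b = BinNode(node.label)
    prev: Optional[BinNode] = None
    for child in node.children:
        cb = to_binary(child)
        if prev is None:
            b.left = cb          # first child hangs on the left pointer
        else:
            prev.right = cb      # later children chain along right pointers
        prev = cb
    return b


def preorder(b: Optional[BinNode]) -> List[str]:
    """Flatten the binary tree into a token sequence usable as a model input."""
    if b is None:
        return []
    return [b.label] + preorder(b.left) + preorder(b.right)


if __name__ == "__main__":
    # Toy AST for: for (i = 0; i < n; i++) a[i] = b[i] + c[i];
    loop = ASTNode("ForStmt", [
        ASTNode("Init", [ASTNode("i=0")]),
        ASTNode("Cond", [ASTNode("i<n")]),
        ASTNode("Inc", [ASTNode("i++")]),
        ASTNode("Body", [ASTNode("a[i]=b[i]+c[i]")]),
    ])
    print(preorder(to_binary(loop)))
    # -> ['ForStmt', 'Init', 'i=0', 'Cond', 'i<n', 'Inc', 'i++', 'Body', 'a[i]=b[i]+c[i]']

The flattened token sequence from such a binary tree is one plausible way to feed structural information to a Transformer alongside the raw code text, which is the multimodal pairing the abstract describes.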
Pages: 465-469
Page count: 5