Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Cited: 0
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
component; OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
CLC number
TP39 [Computer Applications];
Discipline codes
081203; 0835;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential to fully utilize the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use. However, assisting and accelerating the automation of its parallelization remains challenging. Although existing automation tools such as Cetus and DiscoPoP simplify parallelization, they still face limitations when dealing with complex data dependencies and control flows. Inspired by the success of deep learning in the field of Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the problem of automatic parallelization with OpenMP directives. We propose a novel Transformer-based multimodal model, ParaMP, to improve the accuracy of OpenMP directive classification. The ParaMP model not only takes into account the sequential features of the code text but also incorporates the code's structural features, enriching the model's input by representing the Abstract Syntax Trees (ASTs) corresponding to the code in the form of binary trees. In addition, we built a BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, to provide a basis for model training. Experimental evaluation shows that our model outperforms other existing automated tools and models on key performance metrics such as F1 score and recall. This study demonstrates a significant improvement in the accuracy of OpenMP directive classification by combining sequential and structural features of code text, providing valuable insight into applying deep learning techniques to programming tasks.
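The abstract describes representing each code snippet's AST as a binary tree to enrich the model's input. The paper's exact simplification is not specified here; the sketch below is a hypothetical illustration using the standard left-child right-sibling (LCRS) encoding, one common way to binarize an arbitrary n-ary AST:

```python
# Hypothetical sketch: LCRS encoding of an n-ary AST into a binary tree.
# Node labels and the toy AST below are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """An n-ary AST node as a parser might produce it."""
    label: str
    children: List["Node"] = field(default_factory=list)

@dataclass
class BinNode:
    """Binary form: left = first child, right = next sibling."""
    label: str
    left: Optional["BinNode"] = None
    right: Optional["BinNode"] = None

def to_binary(node: Node) -> BinNode:
    """Convert an n-ary AST node into its LCRS binary representation."""
    b = BinNode(node.label)
    prev = None
    for child in node.children:
        cb = to_binary(child)
        if prev is None:
            b.left = cb       # first child hangs on the left
        else:
            prev.right = cb   # later siblings chain on the right
        prev = cb
    return b

# Toy AST for a loop like: for (init; cond; inc) a[i] = b[i] + c[i];
ast = Node("ForStmt", [
    Node("Init"), Node("Cond"), Node("Inc"),
    Node("Assign", [Node("a[i]"), Node("b[i]+c[i]")]),
])
bt = to_binary(ast)
print(bt.label, bt.left.label, bt.left.right.label)  # ForStmt Init Cond
```

Every node in the binary tree then has at most two children, which gives the model a fixed branching structure to traverse or linearize alongside the token sequence.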
Pages: 465-469
Page count: 5