Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Cited by: 0
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
CLC classification
TP39 [Computer applications];
Discipline codes
081203 ; 0835 ;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential to fully utilize the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use, yet assisting and accelerating the automation of its parallelization remains challenging. Although existing automation tools such as Cetus and DiscoPoP simplify parallelization, they still face limitations when dealing with complex data dependencies and control flows. Inspired by the success of deep learning in the field of Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the problem of automatically parallelizing code with OpenMP directives. We propose a novel Transformer-based multimodal model, ParaMP, to improve the accuracy of OpenMP directive classification. ParaMP accounts not only for the sequential features of the code text but also for its structural features, enriching the model's input by representing the Abstract Syntax Tree (AST) of each code snippet as a binary tree. In addition, we built the BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, to provide a basis for model training. Experimental evaluation shows that our model outperforms existing automated tools and models on key performance metrics such as F1 score and recall. This study demonstrates a significant improvement in the accuracy of OpenMP directive classification by combining the sequential and structural features of code, offering valuable insight into applying deep learning techniques to programming tasks.
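The abstract's central idea, feeding the model structural features by encoding a code snippet's AST as a binary tree, can be illustrated with the classic first-child/next-sibling transform. This is only a sketch under assumptions: the node labels (`ForStmt`, `Cond`, etc.) and the choice of transform are illustrative, since the abstract does not specify the paper's exact AST simplification.

```python
# Sketch: encoding an n-ary AST as a binary tree via the
# first-child / next-sibling (left-child right-sibling) transform.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """Simplified n-ary AST node."""
    label: str
    children: List["Node"] = field(default_factory=list)

@dataclass
class BinNode:
    """Binary-tree node: left = first child, right = next sibling."""
    label: str
    left: Optional["BinNode"] = None
    right: Optional["BinNode"] = None

def to_binary(node: Node) -> BinNode:
    b = BinNode(node.label)
    prev: Optional[BinNode] = None
    for child in node.children:
        cb = to_binary(child)
        if prev is None:
            b.left = cb        # first child hangs on the left
        else:
            prev.right = cb    # later siblings chain on the right
        prev = cb
    return b

def preorder(b: Optional[BinNode]) -> List[str]:
    """Linearize the binary tree, e.g. as a token sequence for a Transformer."""
    if b is None:
        return []
    return [b.label] + preorder(b.left) + preorder(b.right)

# A toy AST for something like `for (i = 0; i < n; i++) sum += a[i];`
ast = Node("ForStmt", [
    Node("Init"), Node("Cond"), Node("Inc"),
    Node("Body", [Node("CompoundAssign")]),
])
print(preorder(to_binary(ast)))
# → ['ForStmt', 'Init', 'Cond', 'Inc', 'Body', 'CompoundAssign']
```

The transform is lossless (the original n-ary tree is recoverable), which is what makes it a reasonable way to expose tree structure to a sequence model alongside the raw code text.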
Pages: 465-469 (5 pages)
Related papers
50 records in total
  • [41] Modelling multi-modal learning in a hawkmoth
    Balkenius, Anna
    Kelber, Almut
    Balkenius, Christian
    FROM ANIMALS TO ANIMATS 9, PROCEEDINGS, 2006, 4095 : 422 - 433
  • [42] Advancing Physics Learning Through Traversing a Multi-Modal Experimentation Space
    Kuhn, Jochen
    Nussbaumer, Alexander
    Pirker, Johanna
    Karatzas, Dimosthenis
    Pagani, Alain
    Conlan, Owen
    Memmel, Martin
    Steiner, Christina M.
    Guetl, Christian
    Albert, Dietrich
    Dengel, Andreas
    WORKSHOP PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS, 2015, 19 : 373 - 380
  • [43] MaPLe: Multi-modal Prompt Learning
    Khattak, Muhammad Uzair
    Rasheed, Hanoona
    Maaz, Muhammad
    Khan, Salman
    Khan, Fahad Shahbaz
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 19113 - 19122
  • [44] Multi-Modal Convolutional Dictionary Learning
    Gao, Fangyuan
    Deng, Xin
    Xu, Mai
    Xu, Jingyi
    Dragotti, Pier Luigi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 1325 - 1339
  • [45] RetrievalMMT: Retrieval-Constrained Multi-Modal Prompt Learning for Multi-Modal Machine Translation
    Wang, Yan
    Zeng, Yawen
    Liang, Junjie
    Xing, Xiaofen
    Xu, Jin
    Xu, Xiangmin
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 860 - 868
  • [46] Enhancing sensory memory through multi-modal stimulation on a finger training and assessment device
    Mauracher, Dorothea
    Eder, Jasmin
    Stein, Ruben
    Zamarian, Laura
    Kim, Yeongmi
    2024 10TH IEEE RAS/EMBS INTERNATIONAL CONFERENCE FOR BIOMEDICAL ROBOTICS AND BIOMECHATRONICS, BIOROB 2024, 2024, : 1473 - 1478
  • [47] Enhancing glaucoma detection through multi-modal integration of retinal images and clinical biomarkers
    Sivakumar, Rishikesh
    Penkova, Anita
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 143
  • [48] Enhancing Acute Bilirubin Encephalopathy Diagnosis with Multi-Modal MRI: A Deep Learning Approach
    Zhang, Huan
    Xia, Shunren
    APPLIED SCIENCES-BASEL, 2024, 14 (06):
  • [49] A multi-modal pre-training transformer for universal transfer learning in metal–organic frameworks
    Yeonghun Kang
    Hyunsoo Park
    Berend Smit
    Jihan Kim
    Nature Machine Intelligence, 2023, 5 : 309 - 318
  • [50] A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique
    Karatay, Busra
    Bestepe, Deniz
    Sailunaz, Kashfia
    Ozyer, Tansel
    Alhajj, Reda
    2022 7TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND MACHINE LEARNING APPLICATIONS (CDMA 2022), 2022, : 145 - 150