Enhancing HLS Performance Prediction on FPGAs Through Multimodal Representation Learning

Cited: 0
Authors
Shang, Longshan [1 ]
Wang, Teng [1 ]
Gong, Lei [1 ]
Wang, Chao [1 ]
Zhou, Xuehai [1 ]
Affiliations
[1] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Design space exploration (DSE); high-level synthesis (HLS); multimodality
DOI
10.1109/LES.2024.3446797
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The emergence of design space exploration (DSE) technology has reduced the cost of searching for pragma configurations that yield optimal-performance microarchitectures. However, obtaining a synthesis report for a single design candidate can be time-consuming, sometimes taking several hours or even tens of hours, rendering the process prohibitively expensive. Researchers have proposed many solutions to address this issue, but previous studies have focused on extracting features from a single modality, making it difficult to evaluate design quality comprehensively. To overcome this limitation, this letter introduces a novel modal-aware representation learning method for evaluating high-level synthesis (HLS) designs, named MORPH, which integrates information from three data modalities to characterize HLS designs: code, graph, and code description (caption). Remarkably, our model outperforms the baseline, demonstrating a 6%-25% improvement in root-mean-squared-error (RMSE) loss. Moreover, the transferability of our predictor is also notably enhanced.
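The abstract's core idea, combining embeddings from the code, graph, and caption modalities to predict design performance, measured by RMSE, can be sketched minimally. The late-fusion-by-concatenation scheme, the embedding sizes, and the linear regressor below are illustrative assumptions, not MORPH's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fixed-size embeddings for one HLS design candidate,
# one vector per modality (code, graph, caption); dimensions are arbitrary.
code_emb = rng.standard_normal(16)
graph_emb = rng.standard_normal(16)
caption_emb = rng.standard_normal(16)

# Simple late fusion: concatenate the per-modality embeddings.
fused = np.concatenate([code_emb, graph_emb, caption_emb])  # shape (48,)

# A linear regressor standing in for the learned performance predictor.
w = rng.standard_normal(fused.shape[0]) / fused.shape[0]
predicted_latency = float(fused @ w)

def rmse(y_true, y_pred):
    """Root-mean-squared error, the metric reported in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(fused.shape)                    # (48,)
print(rmse([1.0, 2.0], [1.0, 2.0]))  # 0.0
```

In practice each modality would be encoded by its own pretrained model before fusion; the point here is only that a joint representation feeds a single regressor whose error is scored with RMSE.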
Pages: 385-388 (4 pages)