Exploiting parallelism in memory operations for code optimization

Cited by: 0
Authors
Paek, Y [1 ]
Choi, J
Joung, J
Lee, J
Kim, S
Affiliations
[1] Seoul Natl Univ, Sch Elect Engn, Seoul 151744, South Korea
[2] Samsung Adv Inst Technol, Yongin 449712, Gyeonggi Do, South Korea
[3] Korea Univ, Dept Elect & Comp Engn, Seoul 136701, South Korea
Keywords
DOI
10.1007/11532378_11
CLC number
TP301 [Theory, Methods];
Subject classification
081202;
Abstract
Code size reduction is becoming ever more important for compilers targeting embedded processors, because these processors are often severely limited by storage constraints, so reduced code size can have a significant positive impact on their performance. Existing code size reduction techniques differ in their motivations and application contexts, and many rely on special hardware features of their target processors. In this work, we propose a novel technique that fully utilizes a set of hardware instructions, called multiple load/store (MLS) or parallel load/store (PLS), which are specifically designed to reduce code size by minimizing the number of memory operations in the code. Many microprocessors support MLS instructions, yet no existing compiler fully exploits their potential benefit; they are used only in limited cases. This is mainly because optimizing memory accesses with MLS instructions in the general case is an NP-hard problem, requiring complex joint assignment of registers and of memory offsets for variables in a stack frame. Our technique uses a pair of heuristics to handle this problem efficiently within a polynomial time bound.
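To make the idea concrete, here is a minimal sketch (not the paper's actual heuristics) of the kind of folding an MLS-aware compiler performs: loads of stack variables that sit at consecutive frame offsets can be collapsed into a single multiple-load instruction, much as ARM's LDM replaces several LDRs. The function name, variable names, and offsets below are hypothetical, chosen only for illustration.

```python
def group_mls(accesses, offsets):
    """Greedily group an ordered sequence of variable loads into
    MLS candidates.

    accesses: list of variable names, in the order they are loaded.
    offsets:  dict mapping each variable to its frame offset (in words).
    Returns a list of groups; any group longer than 1 can be emitted
    as a single multiple-load instruction.
    """
    groups = []
    current = []
    for var in accesses:
        # Extend the current group only if this variable is laid out
        # immediately after the previous one in the stack frame.
        if current and offsets[var] == offsets[current[-1]] + 1:
            current.append(var)
        else:
            if current:
                groups.append(current)
            current = [var]
    if current:
        groups.append(current)
    return groups

# With frame offsets a=0, b=1, c=2, d=5, the loads of a, b, c fold
# into one MLS instruction, leaving one ordinary load for d:
# four memory operations become two.
offsets = {"a": 0, "b": 1, "c": 2, "d": 5}
print(group_mls(["a", "b", "c", "d"], offsets))
```

This sketch also shows why the full problem is hard: the grouping quality depends entirely on the offset assignment, so the compiler must choose offsets (and registers) to maximize such runs, which is where the NP-hardness the abstract mentions arises.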
Pages: 132 - 148
Page count: 17
Related papers
50 records
  • [41] Exploiting Hyper-Loop Parallelism in Vectorization to Improve Memory Performance on CUDA GPGPU
    Xu, Shixiong
    Gregg, David
    2015 IEEE TRUSTCOM/BIGDATASE/ISPA, VOL 3, 2015, : 53 - 60
  • [42] A Novel NAND Flash Memory Architecture for Maximally Exploiting Plane-Level Parallelism
    Kim, Myeongjin
    Jung, Wontaeck
    Lee, Hyuk-Jun
    Chung, Eui-Young
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2019, 27 (08) : 1957 - 1961
  • [43] Exploiting Parallelism with Vertex-Clustering in Processing-In-Memory-based GCN Accelerators
    Zhu, Yu
    Zhu, Zhenhua
    Dai, Guohao
    Zhong, Kai
    Yang, Huazhong
    Wang, Yu
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 652 - 657
  • [44] Run-time techniques for exploiting irregular task parallelism on distributed memory architectures
    Fu, C
    Yang, T
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 1997, 42 (02) : 143 - 156
  • [45] Exploiting Redundancy, Recurrency and Parallelism: How to Link Millions of Addresses with Ten Lines of Code in Ten Minutes
    Zhang, Yuhang
    Churchill, Tania
    Ng, Kee Siong
    DATA MINING, AUSDM 2017, 2018, 845 : 107 - 122
  • [46] Exploiting GPU parallelism in improving bees swarm optimization for mining big transactional databases
    Djenouri, Youcef
    Djenouri, Djamel
    Belhadi, Asma
    Fournier-Viger, Philippe
    Lin, Jerry Chun-Wei
    Bendjoudi, Ahcene
    INFORMATION SCIENCES, 2019, 496 : 326 - 342
  • [47] Exploiting Loop Parallelism with Redundant Execution
Tang, Weiyu
Shi, Wu
Zang, Binyu
Zhu, Chuanqi
    Journal of Computer Science and Technology, 1997, (02) : 105 - 112
  • [48] EXPLOITING TASK AND DATA PARALLELISM ON A MULTICOMPUTER
    SUBHLOK, J
    STICHNOTH, JM
    OHALLARON, DR
    GROSS, T
SIGPLAN NOTICES, 1993, 28 (07) : 13 - 22
  • [49] Exploiting Parallelism in Coalgebraic Logic Programming
    Komendantskaya, Ekaterina
    Schmidt, Martin
    Heras, Jonathan
    ELECTRONIC NOTES IN THEORETICAL COMPUTER SCIENCE, 2014, 303 (303) : 121 - 148
  • [50] Exploiting parallelism in interactive theorem provers
    Moten, R
    THEOREM PROVING IN HIGHER ORDER LOGICS, 1998, 1479 : 315 - 330