Efficient Dynamic Pinning of Parallelized Applications by Distributed Reinforcement Learning

Cited by: 0
Authors: Georgios C. Chasparis, Michael Rossbory
Affiliation: [1] Software Competence Center Hagenberg GmbH
Keywords: Dynamic pinning; Reinforcement learning; Parallel applications
DOI: not available
Abstract
This paper introduces a resource allocation framework specifically tailored to the problem of dynamic placement (or pinning) of parallelized applications onto processing units. Under the proposed setup, each thread of the parallelized application constitutes an independent decision maker (or agent), which, based on its own prior performance measurements and its own prior CPU affinities, decides on which processing unit to run next. Decisions are updated recursively for each thread by a resource manager/scheduler that runs in parallel to the application's threads, periodically records their performance, and assigns them new CPU affinities. To update the CPU affinities, the scheduler uses a distributed reinforcement-learning algorithm, each branch of which is responsible for assigning a new placement strategy to its thread. The proposed framework is flexible enough to address alternative optimization criteria, such as maximum average processing speed and minimum speed variance among threads. We demonstrate analytically that convergence to locally optimal placements is achieved asymptotically. Finally, we validate these results through experiments on Linux platforms.
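To make the described scheduler loop concrete, the following is a minimal Python sketch of a learning-based pinning scheme of this kind on Linux. The step size, epoch length, the performance probe measure_speed, and the linear reward-inaction style strategy update are illustrative assumptions for this sketch, not the paper's exact algorithm; os.sched_setaffinity and os.sched_getaffinity are the standard Linux-only Python calls for reading and setting CPU affinity.

    import os
    import random
    import time

    # Illustrative sketch: periodically re-pin worker threads (identified by
    # their Linux TIDs) using one strategy vector per thread, updated by a
    # simple reinforcement-learning rule. Reward probe, step size and epoch
    # length are assumptions made for this example.

    CPUS = list(range(os.cpu_count() or 1))
    EPSILON = 0.05        # learning step size (assumed)
    EPOCH = 0.5           # seconds between scheduler updates (assumed)

    def measure_speed(tid):
        """Placeholder performance probe (e.g. work items completed per
        epoch, normalized to [0, 1]); a random stand-in here."""
        return random.random()

    def run_scheduler(tids, epochs=100):
        # One probability distribution over CPUs per thread, initially uniform.
        strategy = {tid: [1.0 / len(CPUS)] * len(CPUS) for tid in tids}
        for _ in range(epochs):
            for tid in tids:
                # Sample a CPU from the thread's strategy and pin it there.
                cpu = random.choices(CPUS, weights=strategy[tid])[0]
                os.sched_setaffinity(tid, {cpu})
            time.sleep(EPOCH)
            for tid in tids:
                cpu = next(iter(os.sched_getaffinity(tid)))
                reward = measure_speed(tid)
                # Reward-inaction style update: move the strategy toward the
                # chosen CPU in proportion to the observed reward.
                x = strategy[tid]
                for j, c in enumerate(CPUS):
                    target = 1.0 if c == cpu else 0.0
                    x[j] += EPSILON * reward * (target - x[j])

In this simplified version the scheduler runs in its own loop alongside the application's threads, mirroring the setup in the abstract; the update preserves each strategy as a valid probability distribution as long as EPSILON * reward stays at most 1.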
Pages: 24-38 (14 pages)
Related Papers (showing 10 of 50)
  • [1] Efficient Dynamic Pinning of Parallelized Applications by Distributed Reinforcement Learning
    Chasparis, Georgios C.
    Rossbory, Michael
    INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 2019, 47 (01) : 24 - 38
  • [2] Efficient Dynamic Pinning of Parallelized Applications by Reinforcement Learning with Applications
    Chasparis, Georgios C.
    Rossbory, Michael
    Janjic, Vladimir
    EURO-PAR 2017: PARALLEL PROCESSING, 2017, 10417 : 164 - 176
  • [3] Learning-based Dynamic Pinning of Parallelized Applications in Many-Core Systems
    Chasparis, Georgios C.
    Janjic, Vladimir
    Rossbory, Michael
    Hammond, Kevin
    2019 27TH EUROMICRO INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED AND NETWORK-BASED PROCESSING (PDP), 2019, : 1 - 8
  • [4] Efficient Distributed Reinforcement Learning through Agreement
    Varshavskaya, Paulina
    Kaelbling, Leslie Pack
    Rus, Daniela
    DISTRIBUTED AUTONOMOUS ROBOTIC SYSTEMS 8, 2009, : 367 - 378
  • [5] Distributed economic pinning control based on deep reinforcement learning for isolated microgrids
    Tang, Chengye
    Zhao, Jianfeng
    ELECTRIC POWER SYSTEMS RESEARCH, 2025, 241
  • [6] Distributed Reinforcement Learning for Flexible and Efficient UAV Swarm Control
    Venturini, Federico
    Mason, Federico
    Pase, Francesco
    Chiariotti, Federico
    Testolin, Alberto
    Zanella, Andrea
    Zorzi, Michele
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7 (03) : 955 - 969
  • [7] Efficient Reinforcement Learning Method for Dynamic System Control
    Park C.
    Jeong C.
    Yoo J.
    Kang C.M.
    Transactions of the Korean Institute of Electrical Engineers, 2022, 71 (09): : 1293 - 1301
  • [8] Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning
    Yaghmaee, F.
    Koohi, H. Reza
    INTERNATIONAL JOURNAL OF ENGINEERING, 2015, 28 (02): : 198 - 204
  • [9] Reinforcement learning applications in dynamic pricing of retail markets
    Raju, CVL
    Narahari, Y
    Ravikumar, K
    IEEE INTERNATIONAL CONFERENCE ON E-COMMERCE, 2003, : 339 - 346
  • [10] DISTRIBUTED REINFORCEMENT LEARNING
    WEISS, G
    ROBOTICS AND AUTONOMOUS SYSTEMS, 1995, 15 (1-2) : 135 - 142