Automating the Configuration of MapReduce: A Reinforcement Learning Scheme

Cited by: 8
Authors
Mu, Ting-Yu [1]
Al-Fuqaha, Ala [1,2]
Salah, Khaled [3]
Affiliations
[1] Western Michigan Univ, Comp Sci Dept, Kalamazoo, MI 49008 USA
[2] Hamad Bin Khalifa Univ, Coll Sci & Engn, Doha, Qatar
[3] Khalifa Univ Sci & Technol, Elect & Comp Engn Dept, Abu Dhabi, U Arab Emirates
Keywords
Deep learning; deep Q-network (DQN); machine learning; MapReduce; neural networks; reinforcement learning (RL); self-configuration;
DOI
10.1109/TSMC.2019.2951789
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
With the exponential growth of data and the high demand for the analysis of large datasets, the MapReduce framework has been widely utilized to process data in a timely, cost-effective manner. It is well known that the performance of MapReduce is limited by its default configuration parameters, and a few research studies have focused on finding optimal configurations to improve the performance of the MapReduce framework. Recently, machine-learning-based approaches have received increasing attention as a way to auto-configure the MapReduce parameters and account for the dynamic nature of applications. In this article, we propose and develop a reinforcement learning (RL)-based scheme, named RL-MRCONF, to automatically configure the MapReduce parameters. Specifically, we explore and experiment with two variations of RL-MRCONF: one based on the traditional RL algorithm and the other based on the deep RL algorithm. Simulation results show that RL-MRCONF can successfully and effectively auto-configure the MapReduce parameters dynamically according to changes in job types and computing resources. Moreover, our proposed RL-MRCONF scheme outperforms the traditional RL-based implementation. Using datasets provided by MR-Perf, simulation results show that our proposed scheme provides around 50% improvement in execution time compared with MapReduce under default settings.
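To illustrate the kind of approach the abstract describes, the sketch below implements the "traditional RL" idea as tabular Q-learning over a small discretized configuration space. The parameter names, value grids, and cost model are hypothetical stand-ins, not the paper's actual state/action design or reward function; a real deployment would replace `exec_time` with measured job runtimes.

```python
import random
from collections import defaultdict

# Hypothetical discretized MapReduce knobs (illustrative, not from the paper).
PARAMS = ["io.sort.mb", "reduce.tasks"]
VALUES = {"io.sort.mb": [100, 200, 400], "reduce.tasks": [4, 8, 16]}
ACTIONS = [(p, v) for p in PARAMS for v in VALUES[p]]  # action: set p to v

def exec_time(cfg):
    """Hypothetical stand-in for a measured job execution time (seconds)."""
    best = {"io.sort.mb": 400, "reduce.tasks": 8}
    return 100 + sum(
        10 * abs(VALUES[p].index(cfg[p]) - VALUES[p].index(best[p]))
        for p in PARAMS
    )

def step(cfg, action):
    p, v = action
    nxt = dict(cfg, **{p: v})
    return nxt, -exec_time(nxt)  # reward: negative execution time

Q = defaultdict(float)  # Q[(state, action)] -> estimated return
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(2000):  # episodes from random starting configurations
    cfg = {p: random.choice(VALUES[p]) for p in PARAMS}
    for _ in range(10):
        s = tuple(sorted(cfg.items()))
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        cfg, r = step(cfg, a)
        s2 = tuple(sorted(cfg.items()))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

# Greedy rollout starting from "default" settings converges to the
# low-cost configuration under this toy cost model.
cfg = {"io.sort.mb": 100, "reduce.tasks": 4}
for _ in range(5):
    s = tuple(sorted(cfg.items()))
    cfg, _ = step(cfg, max(ACTIONS, key=lambda x: Q[(s, x)]))
print(cfg)
```

The paper's deep-RL variant would replace the Q table with a neural network (a DQN) so that the scheme generalizes across job types and resource levels rather than memorizing each discrete state.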
Pages: 4183-4196
Page count: 14
Related Papers (50 total)
  • [31] Influence of Spatial Configuration and Percentage of Reinforcement upon Oddity Learning
    Lockhart, J. M.
    Harlow, H. F.
    [J]. JOURNAL OF COMPARATIVE AND PHYSIOLOGICAL PSYCHOLOGY, 1962, 55 (04): 495
  • [32] Real-Time Lane Configuration with Coordinated Reinforcement Learning
    Gunarathna, Udesh
    Xie, Hairuo
    Tanin, Egemen
    Karunasekara, Shanika
    Borovica-Gajic, Renata
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: APPLIED DATA SCIENCE TRACK, ECML PKDD 2020, PT IV, 2021, 12460 : 291 - 307
  • [33] Prioritized Environment Configuration for Drone Control with Deep Reinforcement Learning
    Jang, Sooyoung
    Choi, Changbeom
    [J]. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2022, 12
  • [34] A Novel Reinforcement Learning Approach for Spark Configuration Parameter Optimization
    Huang, Xu
    Zhang, Hong
    Zhai, Xiaomeng
    [J]. SENSORS, 2022, 22 (15)
  • [35] Simulink Compiler Testing via Configuration Diversification With Reinforcement Learning
    Li, Xiaochen
    Guo, Shikai
    Cheng, Hongyi
    Jiang, He
    [J]. IEEE TRANSACTIONS ON RELIABILITY, 2024, 73 (02) : 1060 - 1074
  • [36] Invited: Efficient Reinforcement Learning for Automating Human Decision-Making in SoC Design
    Sadasivam, Shankar
    Chen, Zhuo
    Lee, Jinwon
    Jain, Rajeev
    [J]. 2018 55TH ACM/ESDA/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2018
  • [37] Effects of evolutionary configuration of reinforcement learning applied to airship control
    Motoyama, K
    Suzuki, K
    Yamamoto, M
    Ohuchi, A
    [J]. INTELLIGENT AUTONOMOUS SYSTEMS 6, 2000, : 567 - 572
  • [38] ACES: Automatic Configuration of Energy Harvesting Sensors with Reinforcement Learning
    Fraternali, Francesco
    Balaji, Bharathan
    Agarwal, Yuvraj
    Gupta, Rajesh K.
    [J]. ACM TRANSACTIONS ON SENSOR NETWORKS, 2020, 16 (04)
  • [39] An efficient reinforcement learning scheme for the confinement escape problem
    Gurumurthy, Vignesh
    Mohanty, Nishant
    Sundaram, Suresh
    Sundararajan, Narasimhan
    [J]. APPLIED SOFT COMPUTING, 2024, 152
  • [40] An optimized differential privacy scheme with reinforcement learning in VANET
    Chen, Xin
    Zhang, Tao
    Shen, Sheng
    Zhu, Tianqing
    Xiong, Ping
    [J]. COMPUTERS & SECURITY, 2021, 110