Ensemble reinforcement learning: A survey

Cited by: 8
Authors
Song, Yanjie [1 ]
Suganthan, Ponnuthurai Nagaratnam [2 ]
Pedrycz, Witold [3 ,4 ,5 ]
Ou, Junwei [1 ]
He, Yongming [1 ]
Chen, Yingwu [1 ]
Wu, Yutong [6 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha, Peoples R China
[2] Qatar Univ, Coll Engn, KINDI Ctr Comp Res, Doha, Qatar
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
[4] Polish Acad Sci, Syst Res Inst, Warsaw, Poland
[5] Fac Engn & Nat Sci, Dept Comp Engn, Istanbul, Turkiye
[6] Univ Kent, Dept Analyt Operat & Syst, Canterbury, England
Funding
National Natural Science Foundation of China;
Keywords
Ensemble reinforcement learning; Reinforcement learning; Ensemble learning; Artificial neural network; Ensemble strategy; NEURAL-NETWORKS; LEVEL;
DOI
10.1016/j.asoc.2023.110975
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) has emerged as a highly effective technique for addressing a wide range of scientific and applied problems. Despite its success, certain complex tasks remain difficult to address with a single model or algorithm. In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of RL and ensemble learning (EL), has gained widespread popularity. ERL leverages multiple models or training algorithms to explore the problem space comprehensively and offers strong generalization capabilities. In this study, we present a comprehensive survey of ERL to give readers an overview of recent advances and challenges in the field. First, we introduce the background and motivation for ERL. Second, we analyze in detail the strategies, such as model selection and combination, that have been successfully applied in ERL. We then explore the applications of ERL, summarize the datasets used, and analyze the algorithms employed. Finally, we outline several open questions and discuss future research directions for ERL. By offering guidance for future scientific research and engineering applications, this survey contributes to the advancement of ERL.
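As a concrete illustration of the "combination" strategies the abstract refers to, the Python sketch below combines the action-value estimates of several agents in two common ways: averaging the Q-values and taking a majority vote over the agents' greedy actions. This is a minimal sketch, not the method of the surveyed paper; the ensemble size, action count, and randomly generated Q-values are assumptions made purely for illustration.

```python
# Minimal sketch of two ensemble combination strategies for discrete-action RL.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 5      # ensemble size (assumed for illustration)
N_ACTIONS = 4     # number of discrete actions (assumed for illustration)

# Stand-in for each agent's Q-value estimates at the current state.
# In practice these would come from separately trained Q-tables or Q-networks.
q_values = rng.normal(size=(N_AGENTS, N_ACTIONS))

# Strategy 1: average the Q-values across agents, then act greedily.
avg_action = int(np.argmax(q_values.mean(axis=0)))

# Strategy 2: majority vote over each agent's individual greedy action.
greedy_actions = q_values.argmax(axis=1)
vote_action = int(np.bincount(greedy_actions, minlength=N_ACTIONS).argmax())

print("averaging selects action", avg_action)
print("voting selects action", vote_action)
```

Averaging tends to smooth out individual estimation errors, while voting is robust to a single agent producing extreme Q-values; surveyed ERL methods typically choose between such schemes (or learn a weighting) depending on the task.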
Pages: 16