A Multi-Armed Bandit Hyper-Heuristic

Cited by: 8
Authors
Ferreira, Alexandre Silvestre [1 ]
Goncalves, Richard Aderbal [2 ]
Ramirez Pozo, Aurora Trinidad [1 ]
Affiliations
[1] Univ Fed Parana, Dept Comp Sci, Curitiba, Parana, Brazil
[2] State Univ Ctr Oeste, Dept Comp Sci, Guarapuava, Brazil
Keywords
Hyper-Heuristic; Multi-Armed Bandit; Combinatorial Optimization;
DOI
10.1109/BRACIS.2015.31
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Hyper-heuristics are search methods that aim to solve optimization problems by selecting or generating heuristics. Selection hyper-heuristics choose, from a pool of heuristics, a good one to apply at the current stage of the optimization process. The selection mechanism is the main component of a selection hyper-heuristic and has a great impact on its performance. In this paper, a deterministic selection mechanism based on the concepts of the Multi-Armed Bandit (MAB) problem is proposed. The proposed approach is integrated into the HyFlex framework and compared to twenty other hyper-heuristics using the methodology adopted by the CHeSC 2011 Challenge. The results obtained were good and comparable to those attained by the best hyper-heuristics. Therefore, it is possible to affirm that using a MAB mechanism as the selection method in a hyper-heuristic is a promising approach.
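Setting the HyFlex specifics aside, the idea in the abstract — treating each low-level heuristic as a bandit arm and selecting deterministically via a confidence-bound rule — can be sketched as follows. This is a minimal illustrative UCB1-style selector, not the authors' exact mechanism; class and method names and the reward scheme are assumptions for illustration.

```python
import math

class MABSelector:
    """UCB1-style selector: each low-level heuristic is a bandit arm.

    Illustrative sketch only; the paper's actual MAB variant and credit
    assignment (reward) scheme may differ.
    """

    def __init__(self, n_heuristics, c=2.0):
        self.c = c                          # exploration weight
        self.counts = [0] * n_heuristics    # times each heuristic was applied
        self.rewards = [0.0] * n_heuristics # accumulated reward per heuristic
        self.total = 0                      # total number of applications

    def select(self):
        # Apply every heuristic once before using the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        # Deterministic choice: argmax of empirical mean + confidence bonus.
        return max(
            range(len(self.counts)),
            key=lambda i: self.rewards[i] / self.counts[i]
            + self.c * math.sqrt(math.log(self.total) / self.counts[i]),
        )

    def update(self, arm, reward):
        # Reward could be, e.g., the improvement in objective value
        # obtained by applying the chosen heuristic.
        self.counts[arm] += 1
        self.rewards[arm] += reward
        self.total += 1
```

In an optimization loop, one would call `select()` to pick a heuristic, apply it to the current solution, and feed the observed improvement back through `update()`; the confidence term keeps occasionally re-trying heuristics that have been applied rarely, while the mean term favors those that have paid off.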
Pages: 13-18 (6 pages)
Related Papers (50 records)
  • [41] Multi-Armed Recommender System Bandit Ensembles
    Canamares, Rocio
    Redondo, Marcos
    Castells, Pablo
    RECSYS 2019: 13TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2019, : 432 - 436
  • [42] Multi-armed bandit problem with known trend
    Bouneffouf, Djallel
    Feraud, Raphael
    NEUROCOMPUTING, 2016, 205 : 16 - 21
  • [43] Ambiguity aversion in multi-armed bandit problems
    Christopher M. Anderson
    Theory and Decision, 2012, 72 : 15 - 33
  • [44] Variational inference for the multi-armed contextual bandit
    Urteaga, Inigo
    Wiggins, Chris H.
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [45] An Incentive-Compatible Multi-Armed Bandit Mechanism
    Gonen, Rica
    Pavlov, Elan
    PODC'07: PROCEEDINGS OF THE 26TH ANNUAL ACM SYMPOSIUM ON PRINCIPLES OF DISTRIBUTED COMPUTING, 2007, : 362 - 363
  • [46] Achieving Fairness in the Stochastic Multi-Armed Bandit Problem
    Patil, Vishakha
    Ghalme, Ganesh
    Nair, Vineet
    Narahari, Y.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [47] Gaussian multi-armed bandit problems with multiple objectives
    Reverdy, Paul
    2016 AMERICAN CONTROL CONFERENCE (ACC), 2016, : 5263 - 5269
  • [48] Decentralized Multi-Armed Bandit with Multiple Distributed Players
    Liu, Keqin
    Zhao, Qing
    2010 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2010, : 568 - 577
  • [49] Adaptive Active Learning as a Multi-armed Bandit Problem
    Czarnecki, Wojciech M.
    Podolak, Igor T.
    21ST EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (ECAI 2014), 2014, 263 : 989 - 990
  • [50] Multi-armed Bandit Algorithm against Strategic Replication
    Shin, Suho
    Lee, Seungjoon
    Ok, Jungseul
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151 : 403 - 431