A Multi-Armed Bandit Hyper-Heuristic

Cited by: 8
Authors
Ferreira, Alexandre Silvestre [1 ]
Goncalves, Richard Aderbal [2 ]
Ramirez Pozo, Aurora Trinidad [1 ]
Affiliations
[1] Univ Fed Parana, Dept Comp Sci, Curitiba, Parana, Brazil
[2] State Univ Ctr Oeste, Dept Comp Sci, Guarapuava, Brazil
Keywords
Hyper-Heuristic; Multi-Armed Bandit; Combinatorial Optimization;
DOI
10.1109/BRACIS.2015.31
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Hyper-heuristics are search methods that aim to solve optimization problems by selecting or generating heuristics. Selection hyper-heuristics choose, from a pool of heuristics, a good one to apply at the current stage of the optimization process. The selection mechanism is the main component of a selection hyper-heuristic and has a great impact on its performance. In this paper, a deterministic selection mechanism based on the concepts of the Multi-Armed Bandit (MAB) problem is proposed. The proposed approach is integrated into the HyFlex framework and compared to twenty other hyper-heuristics using the methodology adopted by the CHeSC 2011 Challenge. The results obtained were good and comparable to those attained by the best hyper-heuristics. Therefore, it is possible to affirm that the use of an MAB mechanism as the selection method in a hyper-heuristic is a promising approach.
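The abstract does not spell out the bandit formula used. As a minimal sketch of the general idea (assuming a standard UCB1 rule, an exploration constant, and a reward equal to the observed fitness improvement, all of which are illustrative assumptions rather than details taken from the paper), a bandit-based selector over low-level heuristics could look like this:

```python
import math

class MABHeuristicSelector:
    """Minimal UCB1-style bandit over a pool of low-level heuristics.

    Illustrative sketch only: the arm-value formula, the exploration
    constant, and the reward definition are assumptions, not the exact
    mechanism described in the paper.
    """

    def __init__(self, heuristics, c=1.4):
        self.heuristics = heuristics            # callables: solution -> new solution
        self.c = c                              # exploration weight
        self.counts = [0] * len(heuristics)     # how often each arm was played
        self.rewards = [0.0] * len(heuristics)  # cumulative reward per arm

    def select(self):
        """Return the index of the heuristic to apply next."""
        total = sum(self.counts)
        # Play every arm once before switching to the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        ucb = [self.rewards[i] / self.counts[i]
               + self.c * math.sqrt(math.log(total) / self.counts[i])
               for i in range(len(self.heuristics))]
        return max(range(len(self.heuristics)), key=ucb.__getitem__)

    def update(self, arm, reward):
        """Credit the played arm with the observed improvement."""
        self.counts[arm] += 1
        self.rewards[arm] += reward
```

In a HyFlex-like loop, each iteration would call select(), apply the chosen low-level heuristic to the incumbent solution, and pass the observed improvement (e.g. old fitness minus new fitness, clipped at zero) to update().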
Pages: 13-18
Number of pages: 6
Related Papers
50 records in total
  • [1] A New Hyper-Heuristic based on a Restless Multi-Armed Bandit for Multi-Objective Optimization
    Goncalves, Richard
    Almeida, Carolina
    Venske, Sandra
    Delgado, Myriam
    Pozo, Aurora
    2017 6TH BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 2017, : 390 - 395
  • [2] A New Hyper-Heuristic based on a Contextual Multi-Armed Bandit for Many-Objective Optimization
    Goncalves, Richard
    Almeida, Carolina
    Luders, Ricardo
    Delgado, Myriam
    2018 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2018, : 997 - 1004
  • [3] A Multi-Armed Bandit Selection Strategy for Hyper-heuristics
    Ferreira, Alexandre Silvestre
    Goncalves, Richard Aderbal
    Pozo, Aurora
    2017 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2017, : 525 - 532
  • [4] The multi-armed bandit, with constraints
    Eric V. Denardo
    Eugene A. Feinberg
    Uriel G. Rothblum
    Annals of Operations Research, 2013, 208 : 37 - 62
  • [5] Multi-armed bandit games
    Gursoy, Kemal
    ANNALS OF OPERATIONS RESEARCH, 2024,
  • [6] The multi-armed bandit, with constraints
    Denardo, Eric V.
    Feinberg, Eugene A.
    Rothblum, Uriel G.
    ANNALS OF OPERATIONS RESEARCH, 2013, 208 (01) : 37 - 62
  • [7] The Assistive Multi-Armed Bandit
    Chan, Lawrence
    Hadfield-Menell, Dylan
    Srinivasa, Siddhartha
    Dragan, Anca
    HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019, : 354 - 363
  • [8] Dynamic Multi-Armed Bandit with Covariates
    Pavlidis, Nicos G.
    Tasoulis, Dimitris K.
    Adams, Niall M.
    Hand, David J.
ECAI 2008, PROCEEDINGS, 2008, 178 : 777+
  • [9] Scaling Multi-Armed Bandit Algorithms
    Fouche, Edouard
    Komiyama, Junpei
    Boehm, Klemens
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1449 - 1459
  • [10] The budgeted multi-armed bandit problem
    Madani, O
    Lizotte, DJ
    Greiner, R
    LEARNING THEORY, PROCEEDINGS, 2004, 3120 : 643 - 645