No DBA? No Regret! Multi-Armed Bandits for Index Tuning of Analytical and HTAP Workloads With Provable Guarantees

Cited by: 4
Authors
Perera, R. Malinga [1 ]
Oetomo, Bastian [1 ]
Rubinstein, Benjamin I. P. [1 ]
Borovica-Gajic, Renata [1 ]
Affiliation
[1] Univ Melbourne, Parkville, Vic 3010, Australia
Funding
Australian Research Council
Keywords
Indexes; Databases; Tuning; Physical design; Costs; Design tools; Uncertainty; HTAP; index tuning; multi-armed bandits; physical design tuning; reinforcement learning; SELECTION; DATABASE;
DOI
10.1109/TKDE.2023.3271664
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Automating physical database design has remained a long-term interest in database research due to the substantial performance gains afforded by optimised structures. Despite significant progress, a majority of today's commercial solutions are highly manual, requiring offline invocation by database administrators (DBAs). This status quo is untenable: identifying representative static workloads is no longer realistic, and physical design tools remain susceptible to the query optimiser's cost misestimates. Furthermore, modern application environments like hybrid transactional and analytical processing (HTAP) systems render analytical modelling next to impossible. We propose a self-driving approach to online index selection that does not depend on the DBA or the query optimiser, and instead learns the benefits of viable structures through strategic exploration and direct performance observation. We view the problem as one of sequential decision making under uncertainty, specifically within the bandit learning setting. Multi-armed bandits balance exploration and exploitation to provably guarantee average performance that converges to that of policies that are optimal with perfect hindsight. Our comprehensive empirical evaluation against a state-of-the-art commercial tuning tool demonstrates up to 75% speed-up in analytical processing environments and 59% speed-up in HTAP environments. Lastly, our bandit framework outperforms a Monte Carlo tree search (MCTS)-based database optimiser, providing up to 24% speed-up.
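The abstract frames online index selection as a multi-armed bandit problem: candidate indexes act as arms, and directly observed query speed-ups act as rewards. The following is a minimal, generic UCB1 sketch of that framing, not the authors' actual algorithm (which is a contextual combinatorial bandit); all index names and benefit values here are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical candidate indexes (arms) with illustrative true mean benefits.
arms = {"ix_orders_date": 0.7, "ix_lineitem_part": 0.5, "ix_customer_name": 0.2}

counts = {a: 0 for a in arms}    # how many rounds each index was deployed
values = {a: 0.0 for a in arms}  # running mean of observed benefit per index

def observed_benefit(arm):
    # Stand-in for a directly measured, noisy query speed-up.
    return arms[arm] + random.uniform(-0.1, 0.1)

for t in range(1, 301):
    # UCB1: try every arm once, then pick the arm maximising
    # (empirical mean) + (exploration bonus); the bonus shrinks as an
    # arm accumulates observations, shifting play toward exploitation.
    untried = [a for a in arms if counts[a] == 0]
    if untried:
        choice = untried[0]
    else:
        choice = max(arms, key=lambda a: values[a]
                     + math.sqrt(2 * math.log(t) / counts[a]))
    r = observed_benefit(choice)
    counts[choice] += 1
    values[choice] += (r - values[choice]) / counts[choice]  # incremental mean

best = max(counts, key=counts.get)
```

Over enough rounds, the exploration bonus guarantees that the empirically best index dominates play, which is the "converges to optimal with perfect hindsight" property the abstract refers to; the paper's contribution is extending this guarantee to contextual, combinatorial index choices under drifting workloads.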
Pages: 12855-12872
Page count: 18
Related Papers
(21 total)
  • [1] Lenient Regret for Multi-Armed Bandits
    Merlis, Nadav
    Mannor, Shie
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8950 - 8957
  • [2] DBA bandits: Self-driving index tuning under ad-hoc, analytical workloads with safety guarantees
    Perera, R. Malinga
    Oetomo, Bastian
    Rubinstein, Benjamin I. P.
    Borovica-Gajic, Renata
    2021 IEEE 37TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2021), 2021, : 600 - 611
  • [3] Fairness and Welfare Quantification for Regret in Multi-Armed Bandits
    Barman, Siddharth
    Khan, Arindam
    Maiti, Arnab
    Sawarni, Ayush
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 6762 - 6769
  • [4] Bounded Regret for Finitely Parameterized Multi-Armed Bandits
    Panaganti, Kishan
    Kalathil, Dileep
    IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (03): : 1073 - 1078
  • [5] MULTI-ARMED BANDITS AND THE GITTINS INDEX
    WHITTLE, P
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 1980, 42 (02): : 143 - 149
  • [6] Individual Regret in Cooperative Nonstochastic Multi-Armed Bandits
    Bar-On, Yogev
    Mansour, Yishay
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [7] Constrained regret minimization for multi-criterion multi-armed bandits
    Kagrecha, Anmol
    Nair, Jayakrishnan
    Jagannathan, Krishna
    MACHINE LEARNING, 2023, 112 (02) : 431 - 458
  • [8] Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
    Chen, Tianrui
    Gangrade, Aditya
    Saligrama, Venkatesh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [9] Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
    Karpov, Nikolai
    Zhang, Qin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13076 - 13084