Polya tree Monte Carlo method

Cited by: 0
Authors
Zhuang, Haoxin [1 ]
Diao, Liqun [1 ]
Yi, Grace Y. [2 ]
Affiliations
[1] Univ Waterloo, Dept Stat & Actuarial Sci, 200 Univ Ave West, Waterloo, ON N2L 3G1, Canada
[2] Univ Western Ontario, Dept Stat & Actuarial Sci, Dept Comp Sci, 1151 Richmond St, London, ON N6A 5B7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Gibbs sampler; Markov Chain Monte Carlo; Metropolis-Hastings algorithm; Polya trees; Sampling from a distribution; INDEPENDENT METROPOLIS-HASTINGS; DISTRIBUTIONS; MIXTURES; EFFICIENT
DOI
10.1016/j.csda.2022.107665
CLC number
TP39 [Computer Applications]
Discipline code
081203; 0835
Abstract
Markov Chain Monte Carlo (MCMC) methods have been widely used in Statistics and machine learning research. However, such methods have several limitations, including slow convergence and inefficiency in handling multi-modal distributions. To overcome these limitations, a new, efficient sampling method is proposed that applies to general distributions, including multi-modal ones and those with complex structure. The proposed approach, called the Polya tree Monte Carlo (PTMC) method, is rooted in constructing a Polya tree distribution using the Monte Carlo method, and then using this distribution to approximate, and facilitate sampling from, a target distribution that may be complex or have multiple modes. The convergence property of the PTMC method is established, and computationally efficient sampling algorithms are developed based on it. Extensive numerical studies demonstrate the satisfactory performance of the proposed method under various settings, including its superiority to the usual MCMC algorithms; the evaluation and comparison are carried out in terms of sampling efficiency, computational speed, and the capacity to identify distribution modes. Additional details about the method, proofs, and simulation results are provided in the Supplementary Web Appendices online. (c) 2022 Elsevier B.V. All rights reserved.
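The abstract only sketches the idea: partition the support recursively as in a Polya tree, estimate the probability mass of each branch by Monte Carlo, then draw samples by walking the resulting partition tree, which handles multiple modes without a Markov chain getting stuck. The toy sketch below illustrates that general idea for a bimodal 1-D density; it is not the authors' PTMC algorithm, and the names (`target`, `branch_probs`, `sample_tree`) and all tuning constants (depth, Monte Carlo sample sizes, support bounds) are assumptions made for illustration only.

```python
import math
import random

random.seed(1)

def target(x):
    # Unnormalized bimodal density: mixture of N(-2, 1) and N(2, 1).
    return math.exp(-0.5 * (x + 2.0) ** 2) + math.exp(-0.5 * (x - 2.0) ** 2)

def branch_probs(f, lo, hi, depth, n_mc=200):
    """For each dyadic subinterval of [lo, hi] down to `depth`, estimate the
    probability of branching into the left child from Monte Carlo estimates
    of the unnormalized mass of the two halves."""
    probs = {}
    def rec(lo, hi, level, path):
        if level == depth:
            return
        mid = 0.5 * (lo + hi)
        # Crude Monte Carlo integrals of f over each half-interval.
        mass_l = sum(f(random.uniform(lo, mid)) for _ in range(n_mc)) * (mid - lo) / n_mc
        mass_r = sum(f(random.uniform(mid, hi)) for _ in range(n_mc)) * (hi - mid) / n_mc
        total = mass_l + mass_r
        probs[path] = mass_l / total if total > 0 else 0.5
        rec(lo, mid, level + 1, path + "0")
        rec(mid, hi, level + 1, path + "1")
    rec(lo, hi, 0, "")
    return probs

def sample_tree(probs, lo, hi, depth):
    """Draw one value by descending the partition tree according to the
    estimated branch probabilities, then sampling uniformly in the leaf."""
    path = ""
    for _ in range(depth):
        mid = 0.5 * (lo + hi)
        if random.random() < probs[path]:
            path, hi = path + "0", mid
        else:
            path, lo = path + "1", mid
    return random.uniform(lo, hi)

probs = branch_probs(target, -8.0, 8.0, depth=8)
draws = [sample_tree(probs, -8.0, 8.0, 8) for _ in range(5000)]
```

Because every draw is independent, a sampler of this piecewise-constant form has no burn-in or autocorrelation, and both modes are visited in proportion to their mass; the paper's contribution lies in making such a construction rigorous and efficient for general targets.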
Pages: 16