Strategic two-sample test via the two-armed bandit process

Cited: 1
Authors
Chen, Zengjing [1 ,2 ]
Yan, Xiaodong [2 ,3 ]
Zhang, Guodong [1 ]
Affiliations
[1] Shandong Univ, Sch Math, Jinan 250100, Shandong, Peoples R China
[2] Shandong Univ, Zhongtai Secur Inst Financial Studies, Jinan 250100, Shandong, Peoples R China
[3] Shandong Univ, Sch Math, 27 Shanda Nanlu, Jinan 250100, Shandong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SAMPLE-SIZE;
DOI
10.1093/jrsssb/qkad061
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline codes
020208; 070103; 0714;
Abstract
This study aims to improve the power of two-sample tests for whether the difference between two population parameters exceeds a prespecified positive equivalence margin. Whereas the classic test statistic treats the original data as exchangeable, the proposed statistic breaks this structure: a two-armed bandit process strategically integrates the data, and a strategy-specific test statistic is constructed by combining the classic central limit theorem (CLT) with the law of large numbers. The asymptotic theory is developed using nonlinear limit theory in a larger probability space and relates to a 'strategic CLT' with an explicitly defined density function. The asymptotic distribution shows that the proposed statistic is more concentrated under the null hypothesis and less concentrated under the alternative than its classic counterpart, thereby enhancing the testing power. Simulation studies support the theoretical results and demonstrate superior finite-sample power, and a real-data example is provided for illustration.
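The abstract describes a statistic in which a bandit-style strategy decides, observation by observation, how the data enter the test statistic. The paper's actual construction is not reproduced here; the following minimal Python sketch only illustrates the general idea of a strategy-dependent centering driven by a running estimate. The function `strategic_stat`, its decision rule, and the margin choice are all hypothetical placeholders, not the authors' method.

```python
import math
import random
import statistics

def strategic_stat(x, y, margin=0.5):
    """Toy strategy-specific two-sample statistic (illustration only).

    Pairs the two samples and, at each step, a simple bandit-style rule
    picks which 'arm' (which centering, +margin or -margin) to apply,
    based on whether the running mean difference currently exceeds the
    margin.  The centered sum is then standardized as in a CLT-type
    statistic.  This is NOT the construction from the paper.
    """
    n = min(len(x), len(y))
    running_sum = 0.0   # sum of raw differences seen so far
    total = 0.0         # sum of strategy-centered differences
    for i in range(n):
        running_mean = running_sum / i if i else 0.0
        # bandit decision: choose the centering arm for this step
        center = margin if running_mean > margin else -margin
        d = x[i] - y[i]
        running_sum += d
        total += d - center
    sd = statistics.pstdev([x[i] - y[i] for i in range(n)]) or 1.0
    return total / (sd * math.sqrt(n))

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
ys = [random.gauss(0.0, 1.0) for _ in range(200)]
t = strategic_stat(xs, ys, margin=0.5)
print(round(t, 3))
```

Because the centering arm depends on the data seen so far, the summands are no longer exchangeable, which is the structural break the abstract refers to; the paper's nonlinear limit theory is what makes the resulting asymptotics tractable.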
Pages: 1271-1298
Page count: 28