Strategic two-sample test via the two-armed bandit process

Cited: 1
Authors
Chen, Zengjing [1 ,2 ]
Yan, Xiaodong [2 ,3 ]
Zhang, Guodong [1 ]
Affiliations
[1] Shandong Univ, Sch Math, Jinan 250100, Shandong, Peoples R China
[2] Shandong Univ, Zhongtai Secur Inst Financial Studies, Jinan 250100, Shandong, Peoples R China
[3] Shandong Univ, Sch Math, 27 Shanda Nanlu, Jinan 250100, Shandong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
SAMPLE-SIZE;
DOI
10.1093/jrsssb/qkad061
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
This study aims to improve the power of two-sample tests by analysing whether the difference between two population parameters exceeds a prespecified positive equivalence margin. Whereas the classic test statistic treats the original data as exchangeable, the proposed statistic breaks this structure: a two-armed bandit process is employed to integrate the data strategically, and a strategy-specific test statistic is constructed by combining the classic central limit theorem (CLT) with the law of large numbers. The asymptotic theory is developed using nonlinear limit theory in a larger probability space and relates to a 'strategic CLT' with an explicitly defined density function. The asymptotic distribution shows that the proposed statistic is more concentrated under the null hypothesis and less concentrated under the alternative than the classic CLT, thereby enhancing the testing power. Simulation studies support the theoretical results and demonstrate more powerful performance in finite samples. A real-data example is also provided for illustration.
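The classic baseline that the abstract contrasts against can be sketched as a standard two-sample z-test with an equivalence margin. This is only an illustrative sketch of that baseline, not the authors' strategic bandit construction; the sample sizes, means, and margin below are invented for the example.

```python
# Toy sketch of the classic (non-strategic) baseline: a two-sample z-statistic
# for H0: mu1 - mu2 <= delta vs. H1: mu1 - mu2 > delta, where delta is the
# prespecified positive equivalence margin mentioned in the abstract.
# All numeric values here are illustrative assumptions, not from the paper.
import math
import random
import statistics

def classic_z(x, y, delta):
    """Classic two-sample z-statistic with equivalence margin delta."""
    m, n = len(x), len(y)
    se = math.sqrt(statistics.variance(x) / m + statistics.variance(y) / n)
    return (statistics.mean(x) - statistics.mean(y) - delta) / se

random.seed(0)
x = [random.gauss(1.0, 1.0) for _ in range(200)]  # population 1, true mean 1.0
y = [random.gauss(0.0, 1.0) for _ in range(200)]  # population 2, true mean 0.0
z = classic_z(x, y, delta=0.5)  # true gap 1.0 exceeds the margin 0.5
reject = z > 1.645              # one-sided 5% standard-normal critical value
print(round(z, 3), reject)
```

The paper's contribution is to replace the exchangeable pooling implicit in this statistic with a two-armed bandit sampling strategy, tightening the null distribution relative to this classic CLT-based test.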
Pages: 1271-1298 (28 pages)