Q-Learning model for selfish miners with optional stopping theorem for honest miners

Cited: 0
Authors
Rakkini, M. J. Jeyasheela [1]
Geetha, K. [1]
Affiliations
[1] SASTRA Deemed Univ, Sch Comp, Tiruchirappalli 620014, India
Keywords
difficulty adjustment algorithms; gambler's ruin; honest mining; prediction; reinforcement learning; selfish mining
DOI
10.1111/itor.13359
Chinese Library Classification (CLC)
C93 [Management Science]
Subject Classification Codes
12; 1201; 1202; 120202
Abstract
In Bitcoin, the most widely used blockchain cryptocurrency, miners join mining pools and are rewarded in proportion to the hash rate they contribute to the pool. This work proposes predicting the relative gain of miners with machine learning and deep learning models, selecting actions that raise relative gain with a Q-learning model, and applying an optional stopping theorem for honest miners facing selfish-mining attacks. Relative gain is the ratio of the number of blocks mined by selfish miners in the main canonical chain to the number of blocks mined by other miners. We implement a Q-learning agent with ε-greedy value iteration that seeks to increase the selfish miners' relative gain while accounting for the other essential parameters: the miners' hash rates, time-warp manipulation, the height of the blockchain, the number of chain reorganizations, and adjustments to block timestamps. We then analyze the ruin of honest miners via the optional stopping theorem so that honest miners can quit mining before they are completely ruined. Our deep learning model achieves a low mean square error of 0.0032 and a mean absolute error of 0.0464. The Q-learning model exhibits a linearly increasing reward curve, reflecting the growth in relative gain when the agent selects the chain-reorganization attack.
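Stated symbolically, the relative-gain definition above can be written as follows (the notation is ours, not the paper's):

    \mathrm{RelativeGain} \;=\; \frac{B_{\mathrm{selfish}}}{B_{\mathrm{other}}}

where B_selfish counts the selfish miners' blocks accepted into the main canonical chain and B_other counts the blocks of all other miners over the same span of chain.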
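As a concrete illustration of the ε-greedy Q-learning loop the abstract describes, the Python sketch below trains a tabular agent over a toy selfish-mining state, the length of the pool's private-branch lead. The action names, reward shaping, and transition model are illustrative assumptions made here for self-containment, not the paper's environment, which also conditions on hash rate, time warp, chain height, reorganization count, and timestamp adjustment.

    import random
    from collections import defaultdict

    # Hypothetical action set; the paper's action space is richer.
    ACTIONS = ["mine_honestly", "withhold_block", "reorganize_chain"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

    Q = defaultdict(float)   # tabular Q-values keyed by (state, action)

    def step(state, action, selfish_share=0.3):
        """Toy transition: 'state' is the selfish pool's private-branch lead.
        Returns (next_state, reward); the reward loosely tracks relative gain
        (selfish blocks accepted into the main chain vs. honest blocks)."""
        found = random.random() < selfish_share   # did the selfish pool find a block?
        if action == "mine_honestly":
            return 0, (1.0 if found else 0.0)
        if action == "withhold_block":
            return state + (1 if found else 0), 0.0
        # "reorganize_chain": publish the private branch; pays off only with a lead
        return 0, (float(state) if state > 1 else -1.0)

    def choose(state):
        if random.random() < EPSILON:                       # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

    state = 0
    for _ in range(10_000):
        action = choose(state)
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # standard one-step Q-learning update toward the bootstrapped target
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

    # inspect the learned greedy action when the private-branch lead is zero
    print("greedy action at lead 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))

Under these toy dynamics the agent learns to withhold blocks and cash in long leads through reorganization, which is the qualitative behavior behind the linearly increasing reward curve reported in the abstract.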
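The ruin analysis invoked in the abstract rests on the classical gambler's-ruin argument; the derivation below is the textbook optional-stopping computation under a biased-random-walk model of the honest miner's balance, and may differ in detail from the paper's exact formulation. Let X_n be the honest miner's capital, a random walk that gains 1 with probability p and loses 1 with probability q = 1 - p, started at X_0 = i and stopped at \tau = \min\{n : X_n \in \{0, N\}\}. Because M_n = (q/p)^{X_n} is a martingale, the optional stopping theorem gives \mathbb{E}[(q/p)^{X_\tau}] = (q/p)^i, and solving for the ruin probability yields

    P(\text{ruin}) = \frac{(q/p)^{i} - (q/p)^{N}}{1 - (q/p)^{N}}, \qquad p \neq q,

with P(\text{ruin}) = 1 - i/N in the unbiased case p = q = 1/2. A rational honest miner can stop once this probability exceeds an acceptable threshold, rather than mining on to complete ruin.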
Pages: 3975-3998 (24 pages)